I'm taking my first steps into shaders with OpenGL, but it seems I can't compile my shader. More frustratingly, the error log also appears to be empty. I've searched this site extensively and checked many different tutorials, but there doesn't seem to be an explanation for why it fails. I even bought a book dealing with shaders, but it doesn't account for an empty info log.
My feeling is that there must be an error with how I am linking the shader source.
//Create shader
GLuint vertObject;
vertObject = glCreateShader(GL_VERTEX_SHADER);

//Stream
ifstream vertfile;

//Try for open - vertex
try
{
    //Open
    vertfile.open(vertex.c_str());
    if(!vertfile)
    {
        // file couldn't be opened
        cout << "Open " << vertex << " failed.";
    }
    else
    {
        getline(vertfile, verttext, '\0');
        //Test file read worked
        cout << verttext;
    }
    //Close
    vertfile.close();
}
catch(const std::ifstream::failure &e)
{
    // note: this only fires if exceptions were enabled via vertfile.exceptions()
    cout << "Failed to open " << vertex;
}

//Link source
GLint const shader_length = verttext.size();
GLchar const *shader_source = verttext.c_str();
glShaderSource(vertObject, 1, &shader_source, &shader_length);

//Compile
glCompileShader(vertObject);

//Check for compilation result
GLint isCompiled = 0;
glGetShaderiv(vertObject, GL_COMPILE_STATUS, &isCompiled);
if (!isCompiled)
{
    //Did not compile, why?
    std::cout << "The shader could not be compiled\n" << std::endl;
    char errorlog[500] = {'\0'};
    glGetShaderInfoLog(vertObject, 500, 0, errorlog);
    string failed(errorlog);
    printf("Log size is %d", (int)failed.size());
}
printf("Compiled state = %d", isCompiled);
The shader code is as trivial as can be:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I can't get either my fragment or my vertex shader to compile. If I can get the error log to show, then at least I will be able to start error checking my own work. For now, though, I can't even get the errors.
It seems that the reason glCompileShader fails without a log in this case is that I attempted to compile before the OpenGL context had been initialised (glutCreateWindow etc.).
If anybody else gets this problem in future, try printing the OpenGL version just before you create any GLSL objects:
printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));
If you get "OpenGL version is (null)", then you don't have a valid context to create your shaders with. Find where you create your context, and make sure your shader creation comes afterwards.
I am writing a non-graphical command-line tool which calls some OpenGL functions.
const int DEFAULT_VERSION_MAJOR = 4;
const int DEFAULT_VERSION_MINOR = 3;
const auto DEFAULT_PROFILE = QGLFormat::CoreProfile;

int main ()
{
    QGLFormat format;
    {
        format.setVersion (
            DEFAULT_VERSION_MAJOR,
            DEFAULT_VERSION_MINOR);
        format.setProfile (DEFAULT_PROFILE);
    }
    QGLContext context (format);
    // EDIT: this line is failing.
    if (false == context.isValid ())
    {
        std::cerr << "No valid OpenGL context created." << std::endl;
        return EXIT_FAILURE;
    }
    context.makeCurrent ();
    if (const GLenum err = glewInit (); GLEW_OK != err)
    {
        std::cerr
            << __PRETTY_FUNCTION__
            << ": glewInit() returned "
            << glewGetErrorString (err)
            << std::endl;
    }
    glEnable (GL_DEBUG_OUTPUT);
    // SEGMENTATION FAULT (message_callback is defined elsewhere in the tool)
    glDebugMessageCallback ((GLDEBUGPROC) message_callback, nullptr);
I assume this is segfaulting because the libraries are not properly initialized (function pointers not set up or whatever).
The GLEW error is "Missing GL version".
This tool will need to create OpenGL objects e.g. compile shaders, but not draw anything.
What are the minimum steps to get OpenGL libraries working for a non-graphical application?
(A cross-platform solution would be nice, a Linux-only solution will be fine.)
I forgot to call QGLContext::create. That (probably) answers this question, and I'll post another question directed at the QGLContext problem.
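For completeness, here is roughly what the fix looks like (my reconstruction, not tested code): QGLContext::create() is what actually creates the underlying context, and isValid() stays false until it succeeds.

QGLContext context (format);
if (false == context.create ())   // this call was missing
{
    std::cerr << "No valid OpenGL context created." << std::endl;
    return EXIT_FAILURE;
}
context.makeCurrent ();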
I'm editing shaders for Minecraft PE on my phone over SSH, using a KDE terminal on my computer.
Syntax highlighting works well, but it does not show even the simplest errors.
For example, the code below has at least two errors, mostly lexical:
vec4 normalColor;
void main(){
    nomralColor = vec3(1.0,1.0,1.0,1.0);//vec3 != vec4
    gl_FragColor = nomralColor;//normalColor != nomralColor
}
Is there any possible way to check GLSL code for the simplest (at least lexical) errors using one of the available command-line code editors (nano/vim/micro)?
I suggest making a little tool you can use to check. The following code just prints the errors to the console window. You could be coding in C++ or Java for Android; I'll give you my C++ example because the OpenGL calls are the same in Java anyway.
It's pretty short really, you just use glCreateShader() and glCompileShader() to try to compile it, followed by glGetShaderInfoLog() to give you the error message(s).
//tell openGL to do all the work for us
#include <GL/glew.h>   // GL types and entry points
#include <cstdio>
#include <string>
#include <vector>

bool compileShader(GLuint shaderID, std::string shaderCode) {
    // Compile Shader (glShaderSource wants the address of the pointer,
    // not the string data cast to a pointer-to-pointer)
    const char* sourcePtr = shaderCode.c_str();
    glShaderSource(shaderID, 1, &sourcePtr, NULL);
    glCompileShader(shaderID);

    // Check Shader
    GLint Result = GL_FALSE;
    int InfoLogLength;
    glGetShaderiv(shaderID, GL_COMPILE_STATUS, &Result);
    glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);

    // Do we have an error message?
    if (InfoLogLength > 0){
        std::vector<char> shaderErrorMessage(InfoLogLength + 1);
        // Get the message text
        glGetShaderInfoLog(shaderID, InfoLogLength, NULL, &shaderErrorMessage[0]);
#ifdef DBG_SHADER
        printf("Compilation error:\n%s", &shaderErrorMessage[0]);
#endif
        return false;
    }
    return true;
}
and the usage of that function is, given a string of code called FragmentShaderCode
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
bool succeeded = compileShader(FragmentShaderID, FragmentShaderCode);
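To make this usable from a terminal over SSH, you could wrap compileShader() from above in a tiny command-line program. The sketch below is my own illustration, not part of the original answer: it assumes GLFW and GLEW are available, uses a hidden window solely to obtain the GL context that shader calls require, takes the file name as the first argument, and should be built with DBG_SHADER defined so the log is printed.

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <fstream>
#include <sstream>
#include <string>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;

    // Read the whole shader file into a string
    std::ifstream in(argv[1]);
    std::stringstream buffer;
    buffer << in.rdbuf();
    std::string code = buffer.str();

    // Shader calls need a current GL context; a hidden window provides one
    glfwInit();
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "glsl-check", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();

    GLuint id = glCreateShader(GL_FRAGMENT_SHADER); // pick the stage you edit
    bool ok = compileShader(id, code);              // prints errors via DBG_SHADER

    glfwTerminate();
    return ok ? 0 : 1;
}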
I'm following this tutorial, and now I'm trying to create a window surface. Here's my createSurface():
void createSurface() {
    VkResult result = glfwCreateWindowSurface(instance, window, nullptr, &surface);
    if(result != VK_SUCCESS) {
        const char* description;
        int code = glfwGetError(&description);
        cout << "GLFW error " << code << ": " << description << endl;
        throw runtime_error("Window surface creation failed");
    }
}
When I run the program, I get this in the end:
X11: Vulkan instance missing VK_KHR_xcb_surface extension
terminate called after throwing an instance of 'std::runtime_error'
what(): Window surface creation failed
Aborted (core dumped)
Seems like my instance is just missing the VK_KHR_xcb_surface extension, or is it?
Here's the final part of my createInstance(). This part deals with instance extensions and creates the instance in the end:
// Extensions required by GLFW
uint32_t glfwExtensionsCount = 0;
const char** glfwExtensions;
glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionsCount);
instanceCreateInfo.enabledExtensionCount = glfwExtensionsCount;
instanceCreateInfo.ppEnabledExtensionNames = glfwExtensions;

// List enabled extensions in stdout
cout << "Enabled extensions" << endl;
for(uint32_t i = 0; i < instanceCreateInfo.enabledExtensionCount; i++) {
    cout << instanceCreateInfo.ppEnabledExtensionNames[i] << endl;
}

// Create instance
if(vkCreateInstance(&instanceCreateInfo, nullptr, &instance) != VK_SUCCESS) {
    throw runtime_error("Vulkan instance creation failed");
}
This is what it prints out:
Enabled extensions
VK_KHR_surface
VK_KHR_xcb_surface
So VK_KHR_xcb_surface should be enabled, yet GLFW says otherwise at surface creation. What is wrong here?
I also tried with Wayland, but it only changes VK_KHR_xcb_surface into VK_KHR_wayland_surface.
EDIT: This is the output of vulkaninfo.
I'm having an issue whenever I call glGenBuffers on Windows. Whenever I call glGenBuffers, or any 3.2-or-above function, OpenGL returns an INVALID_OPERATION error. After reading around the web, this is probably caused by not having the updated function pointers for 3.2 on Windows. From everything I have read, you must acquire the function pointers at runtime by asking the Windows API wglGetProcAddress to return the function pointer for use with your program and your current driver. This in itself isn't hard, but why reinvent the wheel? Instead I chose to include GLEW to handle the function pointers for me.
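(For illustration, this is roughly what that manual loading looks like; a sketch assuming a context is already current and the PFNGLGENBUFFERSPROC typedef from <GL/glext.h>:)

// Ask the driver for the entry point at runtime, once a context is current
PFNGLGENBUFFERSPROC pglGenBuffers =
    (PFNGLGENBUFFERSPROC) wglGetProcAddress("glGenBuffers");
if (pglGenBuffers == NULL)
{
    // the current context/driver does not expose this function
}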
Here is a small program to demonstrate my issue.
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>

void PrintError(void)
{
    // Print glGetError()
    std::cout << "GL error: 0x" << std::hex << glGetError() << std::dec << std::endl;
}

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    PrintError();

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    GLenum error = glewInit();
    if(error != GLEW_OK)
        std::cerr << glewGetErrorString(error) << std::endl;

    // GLEW information
    if(GLEW_VERSION_3_2)
        std::cout << "Supports 3.2.." << std::endl;

    // Try buffer
    GLuint buffer;
    glGenBuffers(1, &buffer); // INVALID_OPERATION
    PrintError();             // Error shows up

    // This is our main loop
    while(!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
When I run the above code I get an OpenGL window. However, in my console I see that the first call to PrintError returns NO_ERROR while the second returns INVALID_OPERATION. I have a feeling I'm overlooking some small fact, but I just can't seem to locate it on GLFW's or GLEW's webpages.
I'm currently running:
glfw-3.0.4 (32bit)
glew-1.10.0 (32bit)
Update:
In response to glampert's post, I added the following code after the glewInit() call.
GLenum loop_error = glGetError();
while (loop_error != GL_NO_ERROR)
{
    switch (loop_error)
    {
    case GL_NO_ERROR:                      std::cout << "GL_NO_ERROR" << std::endl; break;
    case GL_INVALID_ENUM:                  std::cout << "GL_INVALID_ENUM" << std::endl; break;
    case GL_INVALID_VALUE:                 std::cout << "GL_INVALID_VALUE" << std::endl; break;
    case GL_INVALID_OPERATION:             std::cout << "GL_INVALID_OPERATION" << std::endl; break;
    case GL_INVALID_FRAMEBUFFER_OPERATION: std::cout << "GL_INVALID_FRAMEBUFFER_OPERATION" << std::endl; break;
    case GL_OUT_OF_MEMORY:                 std::cout << "GL_OUT_OF_MEMORY" << std::endl; break;
    }
    loop_error = glGetError();
}
This confirms his assumption that the invalid operation is caused by the glewInit() call.
Update:
It looks like this is a known issue with GLEW and 3.2 contexts:
http://www.opengl.org/wiki/OpenGL_Loading_Library
After identifying GLEW as the troublemaker, I located the following post:
OpenGL: glGetError() returns invalid enum after call to glewInit()
Seems like the suggested solution is to set the experimental flag, which I'm already doing. However, the website mentions that even after doing so, there is still the possibility that it will cause an invalid operation and fail to grab the function pointers.
I think at this point my best solution is to just grab my own function pointers.
It could very well be that the error you are getting is a residual error from GLFW or GLEW.
Try adding this after the library init code:
GLenum error;
while ((error = glGetError()) != GL_NO_ERROR)
{
    // print(error), etc...
}
and before you attempt to call any OpenGL function, to clear the error cache.
If the error is indeed caused by GLEW, it may not necessarily be a dangerous one that stops the library from working. So I wouldn't drop the library just because of this issue, which will eventually be fixed. However, if you do decide to fetch the function pointers yourself, GLFW provides the glfwGetProcAddress function, which will allow you to do so in a portable manner.
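As a sketch of that portable route (my illustration, assuming the PFNGLGENBUFFERSPROC typedef from <GL/glcorearb.h> and a current context):

// Fetch the entry point through GLFW instead of GLEW; the pointer is tied
// to the context that was current when it was fetched
PFNGLGENBUFFERSPROC myGenBuffers =
    (PFNGLGENBUFFERSPROC) glfwGetProcAddress("glGenBuffers");
if (myGenBuffers)
{
    GLuint buffer;
    myGenBuffers(1, &buffer); // should no longer raise INVALID_OPERATION
}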
I must admit this is my first time implementing shaders; previously I have only worked with the fixed-function pipeline. Although I am fairly certain everything I did is correct, there must be an error somewhere.
glLinkProgram(program) returns GL_FALSE when queried for GL_LINK_STATUS. In addition, the info log is empty (when I query the log length, it is 1, which is just the null terminator per the docs, so that checks out). So: linker errors, and no logs. I have also just discovered that the linker problem occurs as soon as I make any use of the gl_Position variable in the vertex shader, whether in an assignment or in a calculation. I tried all sorts of shader variations; it errors but fails to produce a log, and it seems to return GL_FALSE any time gl_Position is touched. Interestingly enough, the fragment shader doesn't cause any problems.
Both fragment and vertex shaders compile fine with no errors. When I introduce syntax errors, they are detected, printed, and the process is aborted prior to program creation, so compilation checking seems to work fine. I debugged and made sure that the files get loaded properly, the source is null-terminated, and the sizes are correct; I also checked the number of attached shaders after attachment and it is 2 (correct, lol). Error-checking code is removed for clarity: I have a checkForErrors() method that checks and prints OpenGL errors, and none are detected.
I am stumped, please someone help! I've been losing sleep over this for 2 days now...
This is the code to load the shader:
FILE *file = fopen(fileName.c_str(), "rb");
if(file == NULL)
{
    Log::error("file does not exist: \"" + fileName + "\"");
    return NULL;
}

// Calculate file size
fseek(file, 0, SEEK_END);
int size = ftell(file);
rewind(file);

// If file size is 0
if(size == 0)
{
    Log::error("file is empty: \"" + fileName + "\"");
    return NULL;
}

char **source = new char*[1];
source[0] = new char[size+1];
source[0][size] = '\0';

// If we weren't able to read the entire file into memory
int readSize = fread(source[0], sizeof(char), size, file);
if(size != readSize)
{
    int fileError = ferror(file);

    // Free the source (array delete to match new[]), log an error and return false
    delete [] source[0];
    delete [] source;
    fclose(file);

    Log::error("couldn't load file into memory [ferror(" + toString<int>(fileError) + ")]: \"" + fileName + "\" (Size: " + toString<int>(readSize) + "/" + toString<int>(size) + " bytes)");
    return NULL;
}

// Close the file
fclose(file);

// Create the shader object
// shaderType is GLenum that is set based on the file extension. I assure you it is correctly set to either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
GLuint shaderID = glCreateShader(shaderType);

// If we could not create the shader object, check for OpenGL errors and return false
if(shaderID == 0)
{
    checkForErrors();
    Log::error("error creating shader \"" + name + "\"");
    delete [] source[0];
    delete [] source;
    return NULL;
}

// Load shader source and compile it
glShaderSource(shaderID, 1, (const GLchar**)source, NULL);
glCompileShader(shaderID);
GLint success;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &success);
if(!success)
{
    GLchar error[1024];
    glGetShaderInfoLog(shaderID, 1024, NULL, error);
    Log::error("error compiling shader \"" + name + "\"\n Log:\n" + error);
    delete [] source[0];
    delete [] source;
    return NULL;
}
else
{
    Log::debug("success! Loaded shader \"" + name + "\"");
}

// Clean up
delete [] source[0];
delete [] source;
Quick note on glShaderSource: I cast to (const GLchar**) because GCC complains about converting non-const to const char pointers; otherwise, I believe I'm entirely compliant.
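(A cast-free alternative, for what it's worth, is to keep the pointer const from the start:)

const GLchar *src = source[0];
glShaderSource(shaderID, 1, &src, NULL);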
No errors there, by the way. The shaders (below) compile without any errors.
Vertex Shader:
void main()
{
    // If I comment the line below, the link status is GL_TRUE.
    gl_Position = vec4(1.0, 1.0, 1.0, 1.0);
}
Fragment Shader:
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Below is the code that creates the shader program, attaches objects and links etc.:
// Create a new shader program
GLuint program = glCreateProgram();
if(program == 0)
{
    Log::error("RenderSystem::loadShaderProgram() - could not create OpenGL shader program; This one is fatal, sorry.");
    getContext()->fatalErrorIn("RenderSystem::loadShaderProgram(shader1, shader2)", "OpenGL failed to create object using glCreateProgram()");
    return NULL;
}

// Attach shader objects to program
glAttachShader(program, vertexShaderID);
glAttachShader(program, fragmentShaderID);

// Link the program
GLint success = GL_FALSE;
glLinkProgram(program);
checkForErrors();
glGetProgramiv(program, GL_LINK_STATUS, &success);
if(success == GL_FALSE)
{
    GLchar errorLog[1024] = {0};
    glGetProgramInfoLog(program, 1024, NULL, errorLog);
    Log::error(std::string() + "error linking program: " + errorLog);
    return NULL;
}

success = GL_FALSE;
glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &success);
if(success == GL_FALSE)
{
    GLchar errorLog[1024] = {0};
    glGetProgramInfoLog(program, 1024, NULL, errorLog);
    checkForErrors();
    Log::error(std::string() + "error validating shader program; Details: " + errorLog);
    return NULL;
}
Usually it doesn't even get as far as validating the program... I'm so frustrated with this, it's very hard not to be vulgar.
Your help is needed and any assistance is genuinely appreciated!
EDIT: I'm running everything on Intel HD 3000 (with support for OpenGL up to 3.1). My target OpenGL version is 2.0.
EDIT2: I would also like to note that I had some issues reading the shaders from text files when I set the "r" or "rt" flags in fopen: the read size was consistently smaller than the actual size by about 10 bytes on all files, and feof() would return true (presumably because text mode collapses CRLF line endings on Windows). When I switched to reading in binary ("rb"), the problem went away and the files were read in fully. I did try several alternative implementations, and they all produced the same error during linking (and I print the shader source to the console right after reading the file to ensure it looks correct; it does).
OK, I knew when I was posting that this was going to be embarrassing, but it is quite bad: the Intel graphics config application that usually accompanies an Intel driver has a 3D settings tab. The "custom settings" check box was unchecked, but the grayed-out option under the "Vertex Processing" setting spelled out "Enable Software Processing". Even though it was unchecked and everything was grayed out, I entered the custom settings and set everything to "Application Settings".
That fixed it! Everything links like it should! I don't know who thought of that option; why would that ever be useful?! It's the only option that looks stunningly out of place.
I went through several driver re-installs, intense debugging, and configuration research; I had to speculate and second-guess absolutely everything! Terrible. I wouldn't wish it on my worst enemies.