OpenGL anisotropic filtering support, contradictory check results

When checking if anisotropic filtering is supported, I get contradictory results.
if (glewIsSupported("GL_EXT_texture_filter_anisotropic") || GLEW_EXT_texture_filter_anisotropic) {
    std::cout << "support anisotropic" << std::endl;
}
GLfloat max;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max);
std::cout << max << std::endl;
The output for this section on my machine is:
16
So seemingly a maximum anisotropy of 16 is supported, but glewIsSupported as well as the GLEW extension string say the opposite.
Is checking for GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT enough and is the glew check wrong, or is something different going on?

Apparently there is a known bug in GLEW where glGetString(GL_EXTENSIONS) is used even in an OpenGL 3+ core context, instead of glGetStringi, which replaced that way of querying extensions in OpenGL 3+.
So until patched, extension querying must be done manually.

A possible way to solve the chicken-and-egg problem is to call glGetString(GL_EXTENSIONS) and check glGetError() for GL_INVALID_ENUM. That error should only be raised if GL_EXTENSIONS is not available on the current context. If you encounter it, use glGetStringi instead. Don't forget to check for errors there, too; GLEW doesn't (as of version 1.10 :/ ).
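For illustration, a minimal sketch of that manual fallback (the helper name hasExtension is mine, not part of GLEW; it assumes a current context and, on the glGetStringi path, a 3.0+ context where that entry point has been loaded):

#include <cstring>

// Try the legacy GL_EXTENSIONS string first; fall back to glGetStringi
// when the legacy query raises GL_INVALID_ENUM (core contexts).
bool hasExtension(const char *name)
{
    const GLubyte *all = glGetString(GL_EXTENSIONS);
    if (glGetError() == GL_NO_ERROR && all)
    {
        // Note: a plain substring match can give false positives when one
        // extension name is a prefix of another; fine for a quick check.
        return std::strstr(reinterpret_cast<const char *>(all), name) != NULL;
    }

    // GL_EXTENSIONS was rejected, so enumerate the extensions one by one.
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const GLubyte *ext = glGetStringi(GL_EXTENSIONS, i);
        if (glGetError() == GL_NO_ERROR && ext &&
            std::strcmp(reinterpret_cast<const char *>(ext), name) == 0)
            return true;
    }
    return false;
}

Called as hasExtension("GL_EXT_texture_filter_anisotropic"), this sidesteps the broken GLEW check until the bug is fixed.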

Related

glewInit() crashing (segfault) after creating osmesa (off-screen mesa) context

I'm trying to run an OpenGL application on a remote computing cluster. I'm using OSMesa as I intend to do off-screen software rendering (no X11 forwarding etc.). I want to use GLEW (to make life with shaders and other extension-related calls easier), and I seem to have built and linked both Mesa and GLEW fine.
When I only call OSMesaCreateContext, glewInit() gives an "OpenGL version not available" output, which probably means the context has not been created. When I call glGetString(GL_EXTENSIONS) I don't get any output, which confirms this. It also shows that GLEW is working fine on its own (other GLEW calls like the version query also work).
Now when I add the OSMesaMakeCurrent call (as shown below), glewInit() crashes with a segfault. However, glGetString(GL_EXTENSIONS) now gives me a list of extensions (which means context creation is successful!).
I've spent hours trying to figure this out and tried tinkering, but nothing works. I would greatly appreciate any help with this. Maybe some of you have experienced something similar before? Thanks again!
int Height = 1; int Width = 1;
OSMesaContext ctx;
void *buffer;

ctx = OSMesaCreateContext( OSMESA_RGBA, NULL );
buffer = malloc( Width * Height * 4 * sizeof(GLfloat) );
if (!OSMesaMakeCurrent( ctx, buffer, GL_UNSIGNED_BYTE, Width, Height )) {
    printf("OSMesaMakeCurrent failed!\n");
    return 0;
}
-- glewInit() crashes after this.
Just to add, OSMesa and GLEW actually did not compile together at first: GLEW undefines GLAPI in its last line, and since osmesa.h will not include gl.h again, GLAPI remains undefined and causes an error in osmesa.h (line 119). I got around this by adding an extern to GLAPI; not sure if this is relevant, though.
Looking at the source of glewInit in glew.c: if glewContextInit succeeds it returns GLEW_OK, GLEW_OK is defined as 0, and so on Linux systems glewInit will always go on to call glxewContextInit, which calls glX functions that in the case of OSMesa will likely not be ready for use. This causes the segfault you are seeing, and unfortunately glewInit has no way of handling this case without patching the C source and recompiling the library.
If others have already solved this I would be interested; I have seen some patched versions of the glew.c sources that work around this. It isn't clear whether there is any energy in the GLEW community to merge in changes that address this use case.
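For reference, a simplified sketch of the flow described above (paraphrased, not the verbatim glew.c source):

GLenum glewInit(void)
{
    GLenum r = glewContextInit();   /* loads the core GL entry points */
    if (r != GLEW_OK)               /* GLEW_OK is defined as 0 */
        return r;
#if defined(_WIN32)
    return wglewContextInit();
#else
    return glxewContextInit();      /* unconditionally touches glX*, which is
                                       what segfaults under OSMesa */
#endif
}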

Segfault with glPrimitiveRestartIndex

I'm having trouble with glPrimitiveRestartIndex.
My code compiles and links, but when I run it, it segfaults on the line:
glPrimitiveRestartIndex(0xffff);
glEnable(GL_PRIMITIVE_RESTART) gives me "invalid enumerant" when I poll with glGetError and glGetErrorString.
I found a similar question, and it suggested that glew might not be initialized properly.
I'm initializing glew before I do this, and I'm also including the glew.h before gl.h.
Also, glewinfo | grep Restart gives me
glPrimitiveRestartIndex: OK
glPrimitiveRestartIndexNV: OK
glPrimitiveRestartNV: OK
So shouldn't it work on my system? What could be wrong?
My code is fairly big, so I can't post everything, here is what I think is relevant:
if (GLEW_OK != glewInit()) {
    // GLEW failed!
    std::cout << "Failed to initialize glew!\n";
    exit(1);
}
glEnable(GL_PRIMITIVE_RESTART);  // invalid enumerant
glPrimitiveRestartIndex(0xffff); // segfault!
glPrimitiveRestartIndex is OpenGL 3.1+.
You should check whether your driver supports it, like so:
if (GLEW_VERSION_3_1) {
    // we are running on 3.1+
} else {
    // some version lower than 3.1
}
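Putting that together with the calls from the question, a possible sketch (the NV fallback uses the NV_primitive_restart entry points that glewinfo reported):

if (GLEW_VERSION_3_1) {
    // Core primitive restart is available on a 3.1+ context.
    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(0xFFFF);
} else if (GLEW_NV_primitive_restart) {
    // Older NV extension: note it is client state, not a server-side enable.
    glEnableClientState(GL_PRIMITIVE_RESTART_NV);
    glPrimitiveRestartIndexNV(0xFFFF);
}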

glBindFramebuffer causes an "invalid operation" GL error when using the GL_DRAW_FRAMEBUFFER target

I'm using OpenGL 3.3 on a GeForce 9800 GTX. The reference pages for 3.3 say that an invalid operation with glBindFramebuffer indicates a framebuffer ID that was not returned from glGenFramebuffers. Yet, I output the ID returned by glGenFramebuffers and the ID I send later to glBindFramebuffer and they are the same.
The GL error goes away, however, when I change the target parameter in glBindFramebuffer from GL_DRAW_FRAMEBUFFER to GL_FRAMEBUFFER. The documentation says I should be able to use GL_DRAW_FRAMEBUFFER. Is there any case in which you can't bind to GL_DRAW_FRAMEBUFFER? Is there any harm from using GL_FRAMEBUFFER instead of GL_DRAW_FRAMEBUFFER? Is this a symptom of a larger problem?
If glBindFramebuffer(GL_FRAMEBUFFER) works when glBindFramebuffer(GL_DRAW_FRAMEBUFFER) does not, and we're not talking about the EXT version of these functions and enums (note the lack of "EXT" suffixes), then you have likely done something else wrong. GL_INVALID_OPERATION is the error you get when a combination of parameters that depend on different state is in conflict; if it were just a missing enum, you would get GL_INVALID_ENUM.
Of course, it could just be a driver bug too. But there's no way to know without knowing what your code looks like.
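For comparison, a minimal sketch of the sequence the question describes, with an error and completeness check after the bind (variable names are placeholders):

GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);   // legal on GL 3.0+ contexts

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glBindFramebuffer error: 0x%X\n", err);

// Attachments would be set up here before checking completeness.
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("Framebuffer incomplete: 0x%X\n", status);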

glGetIntegerv returning garbage value

#include <iostream>
#include "Glew\glew.h"
#include "freeGlut\freeglut.h"
using namespace std;

int main(int argc, char* argv[])
{
    GLint ExtensionCount;
    glGetIntegerv(GL_NUM_EXTENSIONS, &ExtensionCount);
    cout << ExtensionCount << endl;
    return 0;
}
The output of this program is -858993460. Why? It should return the number of extensions supported.
If I remove the freeglut.h header file, the program doesn't run and throws an error message,
error LNK2019: unresolved external symbol __imp__glGetIntegerv@8 referenced in function _main
But, glGetIntegerv is inside glew.h. Why removing freeglut.h would cause an unresolved external error?
EDIT I have OpenGL 3.3 support. Using Radeon 4670 with catalyst 11.6.
@mario & @Banthar yes, thanks. I have to create a context first before using any OpenGL functionality (yes, even for OpenGL 1.1, which comes by default with Windows).
glGetIntegerv is not returning garbage: it either returns a good value or it does not touch the pointed-to address at all. The reason you see garbage is that the variable is not initialized. This seems like a pedantic comment, but it is actually important to know that glGetIntegerv does not touch the variable if it fails. Thanks @Damon
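A minimal sketch of that point: give the variable a sentinel value so a failed query is detectable, and check glGetError() afterwards.

GLint extensionCount = -1;  // sentinel: stays -1 if the query fails
glGetIntegerv(GL_NUM_EXTENSIONS, &extensionCount);
if (glGetError() != GL_NO_ERROR || extensionCount == -1)
    cout << "GL_NUM_EXTENSIONS query failed (no context / unsupported enum?)" << endl;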
This bare-bones version works fine.
int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);
    glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow("Test");

    GLint ExtensionCount;
    glGetIntegerv(GL_NUM_EXTENSIONS, &ExtensionCount);
    cout << ExtensionCount << endl;
    return 0;
}
Are you sure you have OpenGL 3.0? AFAIK, GL_NUM_EXTENSIONS was added in OpenGL 3.0.
I guess your rendering context is using an OpenGL version prior to 3.0 (from what I've read, GL_NUM_EXTENSIONS was introduced in OpenGL 3.0; just because your card supports it doesn't mean you're actually using it). You could retrieve the GL_EXTENSIONS string and then split/count the elements yourself (see the sketch below), but I don't think that's available everywhere either (2.0+?).
What are you trying to do (besides returning the number of extensions)?
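If you do go the GL_EXTENSIONS route on an older context, a possible sketch of counting the space-separated names (this legacy query is deprecated in core profiles):

#include <sstream>

const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
int count = 0;
if (ext)
{
    std::istringstream tokens(ext);
    std::string name;
    while (tokens >> name)  // extension names are separated by spaces
        ++count;
}
cout << count << " extensions" << endl;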
Maybe your library headers expect you to include <GL/gl.h>
In my Windows SDK (7.1) the included GL/GL.h defines the symbol GL_VERSION_1_1. I suspect that this is the version that is really relevant for the purposes of using glGetIntegerv with arguments such as GL_MAJOR_VERSION, GL_MINOR_VERSION or GL_NUM_EXTENSIONS.
Actually, none of these is defined in GL/GL.h, while for instance GL_VERSION and GL_EXTENSIONS are. But when including GL/glew.h all these constants are available.
With respect to GL_VERSION_1_1, the three constants GL_MAJOR_VERSION, GL_MINOR_VERSION and GL_NUM_EXTENSIONS are not valid enumeration values there, and indeed if you call glGetError after trying to use one of them with glGetIntegerv you get error 0x500 (1280 in decimal), which is GL_INVALID_ENUM.

What is the best way to debug OpenGL? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
I find that a lot of the time, OpenGL will show you it failed by not drawing anything. I'm trying to find ways to debug OpenGL programs, by inspecting the transformation matrix stack and so on. What is the best way to debug OpenGL? If the code looks and feels like the vertices are in the right place, how can you be sure they are?
There is no straight answer. It all depends on what you are trying to understand. Since OpenGL is a state machine, sometimes it does not do what you expect as the required state is not set or things like that.
In general, use tools like glTrace / glIntercept (to look at the OpenGL call trace), gDebugger (to visualize textures, shaders, OGL state, etc.) and paper/pencil :). Sometimes it helps to understand how you have set up the camera, where it is looking, and what is being clipped. I have personally relied more on the last approach than on the previous two. But when I can argue that the depth is wrong, it helps to look at the trace. gDebugger is also the only tool that can be used effectively for profiling and optimization of your OpenGL app.
Apart from these tools, most of the time it is the math that people get wrong, and no tool will catch that. Post on the OpenGL.org newsgroup for code-specific comments; you will never be disappointed.
What is the best way to debug OpenGL?
Without considering additional and external tools (which other answers already do).
Then the general way is to extensively call glGetError(). However, a better alternative is to use Debug Output (KHR_debug, ARB_debug_output). This provides you with the ability to set a callback for messages of varying severity levels.
In order to use debug output, the context must be created with the WGL/GLX_DEBUG_CONTEXT_BIT flag. With GLFW this can be set with the GLFW_OPENGL_DEBUG_CONTEXT window hint.
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
Note that if the context isn't a debug context, then you are not guaranteed to receive all, or even any, messages.
Whether you have a debug context or not can be detected by checking GL_CONTEXT_FLAGS:
GLint flags;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT) {
    // It's a debug context
}
You would then go ahead and specify a callback:
void debugMessage(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length,
                  const GLchar *message, const void *userParam)
{
    // Print, log, whatever based on the enums and message
}
Each possible value for the enums can be seen here. Especially remember to check the severity, as some messages might just be notifications and not errors.
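For instance, a minimal filter inside the callback could look like this (assuming plain notifications should be ignored):

if (severity == GL_DEBUG_SEVERITY_NOTIFICATION)
    return;                       // skip chatty notifications
fprintf(stderr, "GL debug [source 0x%X, type 0x%X, id %u]: %s\n",
        source, type, id, message);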
You can now go ahead and register the callback.
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(debugMessage, NULL);
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE);
You can even inject your own messages using glDebugMessageInsert().
glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_ERROR, 0,
                     GL_DEBUG_SEVERITY_NOTIFICATION, -1, "Very dangerous error");
When it comes to shaders and programs you always want to be checking GL_COMPILE_STATUS, GL_LINK_STATUS and GL_VALIDATE_STATUS. If any of them reflects that something is wrong, then additionally always check glGetShaderInfoLog() / glGetProgramInfoLog().
GLint linkStatus;
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
if (!linkStatus)
{
    GLint infoLogLength;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLength);

    GLchar *infoLog = new GLchar[infoLogLength + 1];
    glGetProgramInfoLog(program, infoLogLength * sizeof(GLchar), NULL, infoLog);
    ...
    delete[] infoLog;
}
The string returned by glGetProgramInfoLog() will be null terminated.
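The same pattern works for shader objects; a minimal sketch that queries the log length first:

GLint compileStatus;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compileStatus);
if (!compileStatus)
{
    GLint infoLogLength;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLength);

    GLchar *infoLog = new GLchar[infoLogLength + 1];
    glGetShaderInfoLog(shader, infoLogLength, NULL, infoLog);
    // log or print infoLog here
    delete[] infoLog;
}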
You can also go a bit more extreme and use a few debug macros in a debug build, for example using the glIs*() functions to check that an object really is of the expected type.
assert(glIsProgram(program) == GL_TRUE);
glUseProgram(program);
If debug output isn't available and you just want to use glGetError(), then you're of course free to do so.
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    printf("OpenGL Error: %u\n", err);
Since a numeric error code isn't that helpful, we could make it a bit more human readable by mapping the numeric error codes to a message.
const char* glGetErrorString(GLenum error)
{
    switch (error)
    {
        case GL_NO_ERROR:                      return "No Error";
        case GL_INVALID_ENUM:                  return "Invalid Enum";
        case GL_INVALID_VALUE:                 return "Invalid Value";
        case GL_INVALID_OPERATION:             return "Invalid Operation";
        case GL_INVALID_FRAMEBUFFER_OPERATION: return "Invalid Framebuffer Operation";
        case GL_OUT_OF_MEMORY:                 return "Out of Memory";
        case GL_STACK_UNDERFLOW:               return "Stack Underflow";
        case GL_STACK_OVERFLOW:                return "Stack Overflow";
        case GL_CONTEXT_LOST:                  return "Context Lost";
        default:                               return "Unknown Error";
    }
}
Then checking it like this:
printf("OpenGL Error: [%u] %s\n", err, glGetErrorString(err));
That still isn't very helpful, or rather intuitive: if you have sprinkled a few glGetError() calls here and there, locating which one logged an error can be troublesome.
Again macros come to the rescue.
void _glCheckErrors(const char *filename, int line)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        printf("OpenGL Error: %s (%d) [%u] %s\n", filename, line, err, glGetErrorString(err));
}
Now simply define a macro like this:
#define glCheckErrors() _glCheckErrors(__FILE__, __LINE__)
and voilà, now you can call glCheckErrors() after anything you want, and in case of errors it will tell you the exact file and line at which the error was detected.
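For example (the glBindTexture call and the texture variable are just stand-ins for whatever call you are suspicious of):

glBindTexture(GL_TEXTURE_2D, texture);
glCheckErrors();  // on failure prints e.g. "OpenGL Error: main.cpp (42) [1282] Invalid Operation"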
GLIntercept is your best bet. From their web page:
Save all OpenGL function calls to text or XML format with the option to log individual frames.
Free camera. Fly around the geometry sent to the graphics card and enable/disable wireframe/backface-culling/view frustum render
Save and track display lists.
Saving of the OpenGL frame buffer (color/depth/stencil) pre and post render calls. The ability to save the "diff" of pre and post images is also available.
Apitrace is a relatively new tool from some folks at Valve, but it works great! Give it a try: https://github.com/apitrace/apitrace
I found you can check with glGetError after every line of code you suspect might be wrong. The code doesn't look very clean after doing this, but it works.
For those on Mac, the built-in OpenGL debugger is great as well. It lets you inspect buffers and state, and it helps in finding performance issues.
The gDebugger is an excellent free tool, but it is no longer supported. However, AMD has picked up its development, and the debugger is now known as CodeXL. It is available both as a standalone application and as a Visual Studio plugin, and it works for native C++ applications as well as Java/Python applications using OpenGL bindings, on both NVidia and AMD GPUs. It's one hell of a tool.
There is also the free glslDevil: http://www.vis.uni-stuttgart.de/glsldevil/
It allows you to debug GLSL shaders extensively. It also shows failed OpenGL calls.
However, it's missing features to inspect textures and off-screen buffers.
Nsight is a good debugging tool if you have an NVidia card.
Download and install RenderDoc.
It will give you a timeline where you can inspect the details of every object.
Use glObjectLabel to give OpenGL objects names. These will show up in RenderDoc.
Enable GL_DEBUG_OUTPUT and use glDebugMessageCallback to install a callback function. Very verbose but you won't miss anything.
Check glGetError at the end of every function scope. This way, you will have a traceback to the source of an OpenGL error in the function scope where it occurred. Very maintainable. Preferably, use a scope guard (a sketch of one follows below).
I don't check errors after each OpenGL call unless there's a good reason for it. For example, if glBindProgramPipeline fails that is a hard stop for me.
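A possible sketch of such a scope guard (the class name GLErrorGuard is mine, not from any library):

#include <cstdio>

struct GLErrorGuard
{
    const char *scope;
    explicit GLErrorGuard(const char *s) : scope(s) {}
    ~GLErrorGuard()
    {
        // Drain and report any pending OpenGL errors when the scope ends,
        // tagged with the function name the guard was created in.
        for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
            fprintf(stderr, "OpenGL error 0x%X in %s\n", err, scope);
    }
};

void drawScene()
{
    GLErrorGuard guard(__func__);  // checked when the scope ends
    // ... GL calls ...
}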
You can find out even more about debugging OpenGL here.
Updating the window title dynamically is convenient for me.
Example (using GLFW, C++11):
glfwSetWindowTitle(window, ("Now Time is " + to_string(glfwGetTime())).c_str());