glewInit() crashing (segfault) after creating an OSMesa (off-screen Mesa) context - C++

I'm trying to run an OpenGL application on a remote computing cluster. I'm using OSMesa because I intend to do off-screen software rendering (no X11 forwarding etc.). I want to use GLEW (to make dealing with shaders and other extension-related calls easier), and I seem to have built and linked both Mesa and GLEW fine.
When I call OSMesaCreateContext, glewInit() reports "OpenGL Version not available", which probably means the context has not been created. When I call glGetString(GL_EXTENSIONS) I don't get any output, which confirms this. It also shows that GLEW is working fine on its own (other GLEW calls, like querying the GLEW version, also work).
Now when I add the OSMesaMakeCurrent call (as shown below), glewInit() crashes with a segfault. However, glGetString(GL_EXTENSIONS) now gives me a list of extensions, which means context creation is successful!
I've spent hours trying to figure this out and tried tinkering, but nothing works. I would greatly appreciate any help on this. Has anyone experienced something similar before? Thanks again!
int Height = 1, Width = 1;
OSMesaContext ctx;
void *buffer;

ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
/* Note: sizeof(GLfloat) over-allocates for a GL_UNSIGNED_BYTE buffer
   (sizeof(GLubyte) would do), but that is harmless here. */
buffer = malloc(Width * Height * 4 * sizeof(GLfloat));
if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, Width, Height)) {
    printf("OSMesaMakeCurrent failed!\n");
    return 0;
}
-- glewInit() crashes after this.
Just to add: OSMesa and GLEW actually did not compile together initially, because glew.h undefines GLAPI on its last line, and since osmesa.h will not include gl.h again, GLAPI remains undefined and causes an error in osmesa.h (line 119). I got around this by defining GLAPI as extern; I'm not sure if this is relevant, though.
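For reference, the workaround amounts to something like the following include order (a sketch of the fix described above; exact behavior depends on your Mesa and GLEW versions):

#include <GL/glew.h>

/* glew.h leaves GLAPI undefined, but osmesa.h expects it; restore it
   before including osmesa.h (the "extern" workaround described above). */
#ifndef GLAPI
#define GLAPI extern
#endif

#include <GL/osmesa.h>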

Looking at the source of glewInit() in glew.c: if glewContextInit() succeeds it returns GLEW_OK, and since GLEW_OK is defined as 0, on Linux systems glewInit() then always goes on to call glxewContextInit(). That function calls glX functions which, in the case of OSMesa, will likely not be ready for use. This causes the segfault you see, and unfortunately glewInit() has no capability to handle this case without patching the C source and recompiling the library.
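Paraphrased, the relevant logic in glew.c looks roughly like this (a sketch for illustration, not the verbatim source):

GLenum glewInit(void)
{
    GLenum r = glewContextInit();
    if (r != GLEW_OK)        /* GLEW_OK == 0 */
        return r;
#if defined(_WIN32)
    return wglewContextInit();
#elif !defined(__APPLE__)    /* Linux and friends */
    /* Runs unconditionally and calls glX functions, which segfault
       when the current context is an OSMesa context, not a GLX one. */
    return glxewContextInit();
#else
    return r;
#endif
}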
If others have already solved this I would be interested; I have seen some patched versions of the glew.c sources that work around this. It isn't clear whether there is any energy in the GLEW community to merge changes that address this use case.

glad causes glfwSwapBuffers to return error message

Code
#include <glad.h>
#include <glad.c>
#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT);
    }
    return 0;
}
Background information
My operating system is the latest version of Windows 10.
I use Visual Studio v16.6.3 (latest).
I am very new to OpenGL (I just learned of it today) and am trying to learn how to make graphics with it.
I use GLAD and GLFW.
Problem
The program is supposed to spawn a blank, unresponsive window.
However, when running the program, the window gets created but the glClear call then fails with the error "Exception thrown at 0x0000000000000000 in Project1.exe: 0xC0000005: Access violation executing location 0x0000000000000000."
What I have tried
(Unsurprisingly) commenting out the problem code caused the program to run correctly
I have reinstalled my graphics card drivers
I have tried installing a bunch of different GLAD combinations from the generator site
I have tried running the program in 32-bit and 64-bit (using the corresponding GLFW binaries)
Commenting out #include <glad.h> and <glad.c> caused the program to function properly
Images
Program Properties:
. C/C++:
. . General: https://i.stack.imgur.com/nirHo.png
. Linker:
. . General: https://i.stack.imgur.com/zHkLT.png
. . Input: https://i.stack.imgur.com/TbEiM.png
Files:
. GLFW: https://i.stack.imgur.com/DTxeX.png
. Glad: https://i.stack.imgur.com/GXNaw.png
The behavior is totally correct. What GLAD does is provide a #define glFoo glad_glFoo for every GL function glFoo in <glad.h>, with glad_glFoo being a function pointer of the appropriate type. All of these function pointers are initialized to NULL.
So when you call glClear(GL_COLOR_BUFFER_BIT), you actually call glad_glClear(GL_COLOR_BUFFER_BIT), which of course ends up calling address 0:
"Exception thrown at 0x0000000000000000 in Project1.exe: 0xC0000005: Access violation executing location 0x0000000000000000."
Before you can call a GLAD function pointer, you must do all of these things:
1. have successfully called gladLoadGL() while you had a GL context active for the calling thread,
2. have the same GL context bound to the thread you call your GL function on (or a compatible one, but that is platform-specific),
3. make sure that the GL function actually is provided by the implementation, for example
   a) by checking for a specific GL version which guarantees the presence of that function,
   b) by requesting a specific required GL version at context creation time and bailing out if it is not met, or
   c) by checking for the presence of GL extensions which mandate the availability of that function.
4. With GLAD, you could even check for NULL pointers, because by the way GLAD works it is guaranteed that a non-NULL function pointer implies the presence of the function; GLAD internally basically performs those version and extension checks before querying a function pointer. Note that this is not true for the underlying GL extension mechanism: you might query an unsupported function and the implementation might still supply a non-NULL function pointer as a result. Calling that is undefined behavior if the implementation has not advertised the presence of the function via the GL version or extension string(s).
Since glClear has been present in every GL version since the beginning, all your code is lacking is a call to gladLoadGL().
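A minimal corrected version of the question's code could look like this (a sketch: it keeps the question's include style and assumes glad was generated with its built-in loader; otherwise use gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)):

#include <glad.h>
#include <glad.c>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);  // a context must be current on this thread...
    if (!gladLoadGL())               // ...before GLAD can query function pointers
        return -1;

    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}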

Linking Cuda (cudart.lib) makes DXGI DuplicateOutput1() fail

For an obscure reason, my call to IDXGIOutput5::DuplicateOutput1() fails with error 0x887a0004 (DXGI_ERROR_UNSUPPORTED) after I added cudart.lib to my project.
I work in Visual Studio 2019; my code for monitor duplication is the classic:
hr = output5->DuplicateOutput1(this->dxgiDevice, 0, sizeof(supportedFormats) / sizeof(DXGI_FORMAT), supportedFormats, &this->dxgiOutputDuplication);
And the only thing I do with CUDA at the moment is simply listing the CUDA devices:
int nDevices = 0;
cudaError_t error = cudaGetDeviceCount(&nDevices);
for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    LOG_FUNC_DEBUG("Graphic adapter: Description: %s, Memory Clock Rate: %d kHz, Memory Bus Width: %u bits",
        prop.name,
        prop.memoryClockRate,
        prop.memoryBusWidth
    );
}
Moreover, this piece of code is called long after I try to start monitor duplication with DXGI.
Everything seems correct in my application: I call SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2), and I'm not running on a discrete GPU (see https://support.microsoft.com/en-us/help/3019314/error-generated-when-desktop-duplication-api-capable-application-is-ru).
And by the way, it used to work, and it works again if I just remove the "so simple" CUDA call and cudart.lib from the linker input!
I really don't understand what can cause this strange behavior. Any idea?
...after I added cudart.lib in my project
When you link the CUDA library, you force your application to run on the discrete GPU. You already know this should be avoided, yet you still force it through this link.
...and I'm not running on a discrete GPU...
You are: statically linking to CUDA is a specific case which hints the system to use the dGPU.
There are systems where Desktop Duplication is not working against the dGPU, and yours seems to be one of those. Even though it is unobvious, you are seeing behavior by NVIDIA's design.
(There are, however, also other systems where Desktop Duplication is working against the dGPU and not working against the iGPU.)
Your potential solution is along this line: do not link the application directly against cuda.lib or cudart.lib; instead, use LoadLibrary to dynamically load nvcuda.dll or cudart*.dll and GetProcAddress to retrieve the function addresses from them.
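In code, that suggestion could look roughly like this (a sketch; the DLL name "cudart64_110.dll" is an assumption and depends on the installed CUDA version):

#include <windows.h>
#include <cstdio>

/* Function pointer type matching cudaGetDeviceCount's signature;
   cudaError_t is an int-sized enum, 0 meaning cudaSuccess. */
typedef int (*cudaGetDeviceCount_t)(int*);

int listCudaDevices()
{
    HMODULE cudart = LoadLibraryA("cudart64_110.dll");  /* assumed name */
    if (!cudart)
        return 0;  /* no CUDA runtime loaded: nothing forces the dGPU */

    cudaGetDeviceCount_t pGetDeviceCount =
        (cudaGetDeviceCount_t)GetProcAddress(cudart, "cudaGetDeviceCount");

    int nDevices = 0;
    if (pGetDeviceCount && pGetDeviceCount(&nDevices) == 0)
        printf("Found %d CUDA device(s)\n", nDevices);

    FreeLibrary(cudart);
    return nDevices;
}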

Anyone tried using glMultiDrawArraysIndirect? Compiler can't find the function

Has anyone successfully used glMultiDrawArraysIndirect? I'm including the latest glext.h, but the compiler can't seem to find the function. Do I need to define something (#define ...) before including glext.h?
error: ‘GL_DRAW_INDIRECT_BUFFER’ was not declared in this scope
error: ‘glMultiDrawArraysIndirect’ was not declared in this scope
I'm trying to implement an OpenGL SuperBible example. Here are snippets from the source code:
GLuint indirect_draw_buffer;
glGenBuffers(1, &indirect_draw_buffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_draw_buffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             NUM_DRAWS * sizeof(DrawArraysIndirectCommand),
             draws,
             GL_STATIC_DRAW);
....
// fill the buffers
.....
glMultiDrawArraysIndirect(GL_TRIANGLES, NULL, 3, 0);
I'm on Linux with a Quadro 2000 and the latest drivers installed (NVIDIA 319.60).
You cannot simply #include <glext.h> and expect this problem to fix itself. This header is only half of the equation: it defines the basic constants, function signatures, typedefs, etc. used by OpenGL extensions, but does not actually solve the problem of extending OpenGL.
On most platforms you are guaranteed a certain version of OpenGL (1.1 on Windows), and to use any part of OpenGL newer than that version you must extend the API at runtime. Linux is no different: in order to use glMultiDrawArraysIndirect (...) you have to load this extension from the driver at runtime. This usually means setting up function pointers that are NULL until runtime in order to keep the compiler/linker happy, as sketched below.
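For illustration, manually loading this one function on Linux could look like this (a sketch assuming a current GL context; glXGetProcAddress is declared in GL/glx.h, and the PFN... typedef comes from glext.h):

#include <GL/glx.h>     /* glXGetProcAddress */
#include <GL/glext.h>   /* PFNGLMULTIDRAWARRAYSINDIRECTPROC */

static PFNGLMULTIDRAWARRAYSINDIRECTPROC pglMultiDrawArraysIndirect = NULL;

void loadMultiDrawIndirect(void)
{
    pglMultiDrawArraysIndirect = (PFNGLMULTIDRAWARRAYSINDIRECTPROC)
        glXGetProcAddress((const GLubyte*)"glMultiDrawArraysIndirect");
    /* A NULL result means the driver does not expose the function. */
}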
By far the simplest solution is going to be to use something like GLEW, which will load all of the extensions your driver supports (for versions up to OpenGL 4.4) at runtime. It takes the place of glext.h; all you have to do is initialize the library after you set up your render context, as shown below.
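With GLEW the whole thing reduces to something like this (a sketch; run it once after the render context is current):

#include <GL/glew.h>   /* include before gl.h / glext.h */

/* ... create the window and GL context, make it current ... */
if (glewInit() != GLEW_OK)
{
    fprintf(stderr, "glewInit failed\n");
    exit(EXIT_FAILURE);
}
if (GLEW_VERSION_4_3 || GLEW_ARB_multi_draw_indirect)
{
    /* glMultiDrawArraysIndirect can now be called directly. */
}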

(Define a macro to) facilitate OpenGL command debugging?

Sometimes it takes a long time of inserting conditional prints and glGetError() checks, doing a form of binary search, to narrow down which function call is the first at which an error is reported by OpenGL.
I think it would be cool if there were a way to build a macro which I can wrap around all GL calls that may fail, and which conditionally calls glGetError() immediately afterwards. When compiling for a special target it could check glGetError() with very high granularity, while in a typical release or debug build this wouldn't be enabled (there I'd check it only once per frame).
Does this make sense to do? Searching for this a bit, I find a few people recommending calling glGetError() after every non-draw GL call, which is basically the same thing I'm describing.
So in this case, is there anything clever I can do (context: I am using GLEW) to simplify the process of instrumenting my GL calls this way? It would be a significant amount of work at this point to wrap a macro around each OpenGL function call. It would be great if I could do something clever and get all of this going without manually determining which sections of code to instrument (though that also has potential advantages... but not really: I don't care about performance by the time I'm debugging the source of an error).
Try this:
void CheckOpenGLError(const char* stmt, const char* fname, int line)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
    {
        printf("OpenGL error %08x, at %s:%i - for %s\n", err, fname, line, stmt);
        abort();
    }
}

#ifdef _DEBUG
#define GL_CHECK(stmt) do { \
        stmt; \
        CheckOpenGLError(#stmt, __FILE__, __LINE__); \
    } while (0)
#else
#define GL_CHECK(stmt) stmt
#endif
Use it like this:
GL_CHECK( glBindTexture(GL_TEXTURE_2D, id) );
If an OpenGL function returns a value, remember to declare the variable outside of GL_CHECK (note that glGetString returns const GLubyte*):
const GLubyte* vendor;
GL_CHECK( vendor = glGetString(GL_VENDOR) );
This way you'll have debug checking whenever the _DEBUG preprocessor symbol is defined, but in a "Release" build you'll have just the raw OpenGL calls.
BuGLe sounds like it will do what you want:
Dump a textual log of all GL calls made.
Take a screenshot or capture a video.
Call glGetError after each call to check for errors, and wrap glGetError so that this checking is transparent to your program.
Capture and display statistics (such as frame rate).
Force a wireframe mode.
Recover a backtrace from segmentation faults inside the driver, even if the driver is compiled without symbols.

What is the best way to debug OpenGL? [closed]

I find that a lot of the time, OpenGL will show you it failed by not drawing anything. I'm trying to find ways to debug OpenGL programs, by inspecting the transformation matrix stack and so on. What is the best way to debug OpenGL? If the code looks and feels like the vertices are in the right place, how can you be sure they are?
There is no straight answer. It all depends on what you are trying to understand. Since OpenGL is a state machine, sometimes it does not do what you expect because the required state is not set, or things like that.
In general, use tools like glTrace / glIntercept (to look at the OpenGL call trace), gDEBugger (to visualize textures, shaders, OGL state, etc.) and paper/pencil :). Sometimes it helps to understand how you have set up the camera, where it is looking, what is being clipped, etc. I have personally relied more on the last approach than on the previous two. But when I can argue that the depth is wrong, it helps to look at the trace. gDEBugger is also the only tool that can be used effectively for profiling and optimizing your OpenGL app.
Apart from these tools, most of the time it is the math that people get wrong, and that can't be understood using any tool. Post on the OpenGL.org forums for code-specific comments; you will never be disappointed.
What is the best way to debug OpenGL?
Without considering additional and external tools (which other answers already do).
The general way, then, is to call glGetError() extensively. However, a better alternative is to use Debug Output (KHR_debug / ARB_debug_output). This provides you with the functionality of setting a callback for messages of varying severity levels.
In order to use debug output, the context must be created with the WGL/GLX_DEBUG_CONTEXT_BIT flag. With GLFW this can be set with the GLFW_OPENGL_DEBUG_CONTEXT window hint.
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
Note that if the context isn't a debug context, receiving all, or even any, messages isn't guaranteed.
Whether you have a debug context or not can be detected by checking GL_CONTEXT_FLAGS:
GLint flags;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT)
{
    // It's a debug context
}
You would then go ahead and specify a callback:
void APIENTRY debugMessage(GLenum source, GLenum type, GLuint id, GLenum severity,
                           GLsizei length, const GLchar *message, const void *userParam)
{
    // Print, log, whatever based on the enums and message
}
Each possible value for the enums can be seen here. Especially remember to check the severity, as some messages might just be notifications and not errors.
You can now go ahead and register the callback:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(debugMessage, NULL);
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE);
You can even inject your own messages using glDebugMessageInsert().
glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_ERROR, 0,
                     GL_DEBUG_SEVERITY_NOTIFICATION, -1, "Very dangerous error");
When it comes to shaders and programs you always want to be checking GL_COMPILE_STATUS, GL_LINK_STATUS and GL_VALIDATE_STATUS. If any of them reflects that something is wrong, then additionally always check glGetShaderInfoLog() / glGetProgramInfoLog().
GLint linkStatus;
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
if (!linkStatus)
{
    GLint infoLogLength;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLength);
    GLchar *infoLog = new GLchar[infoLogLength + 1];
    glGetProgramInfoLog(program, infoLogLength * sizeof(GLchar), NULL, infoLog);
    ...
    delete[] infoLog;
}
The string returned by glGetProgramInfoLog() will be null terminated.
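The analogous check for a shader's GL_COMPILE_STATUS mirrors the snippet above (a short sketch, assuming shader holds a shader object):

GLint compileStatus;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compileStatus);
if (!compileStatus)
{
    GLint infoLogLength;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLength);
    GLchar *infoLog = new GLchar[infoLogLength + 1];
    glGetShaderInfoLog(shader, infoLogLength, NULL, infoLog);
    // inspect or log infoLog here
    delete[] infoLog;
}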
You can also go a bit more extreme and utilize a few debug macros in a debug build, for instance using the glIs*() functions to check that the expected object type is the actual type.
assert(glIsProgram(program) == GL_TRUE);
glUseProgram(program);
If debug output isn't available and you just want to use glGetError(), then you're of course free to do so.
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    printf("OpenGL Error: %u\n", err);
Since a numeric error code isn't that helpful, we could make it a bit more human readable by mapping the numeric error codes to a message.
const char* glGetErrorString(GLenum error)
{
    switch (error)
    {
    case GL_NO_ERROR:                      return "No Error";
    case GL_INVALID_ENUM:                  return "Invalid Enum";
    case GL_INVALID_VALUE:                 return "Invalid Value";
    case GL_INVALID_OPERATION:             return "Invalid Operation";
    case GL_INVALID_FRAMEBUFFER_OPERATION: return "Invalid Framebuffer Operation";
    case GL_OUT_OF_MEMORY:                 return "Out of Memory";
    case GL_STACK_UNDERFLOW:               return "Stack Underflow";
    case GL_STACK_OVERFLOW:                return "Stack Overflow";
    case GL_CONTEXT_LOST:                  return "Context Lost";
    default:                               return "Unknown Error";
    }
}
Then checking it like this:
printf("OpenGL Error: [%u] %s\n", err, glGetErrorString(err));
That still isn't very helpful, or better said intuitive, because if you have sprinkled a few glGetError() calls here and there, locating which one logged an error can be troublesome.
Again, macros come to the rescue.
void _glCheckErrors(const char *filename, int line)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        printf("OpenGL Error: %s (%d) [%u] %s\n", filename, line, err, glGetErrorString(err));
}
Now simply define a macro like this:
#define glCheckErrors() _glCheckErrors(__FILE__, __LINE__)
and voilà: now you can call glCheckErrors() after anything you want, and in case of errors it will tell you the exact file and line where the error was detected.
GLIntercept is your best bet. From their web page:
Save all OpenGL function calls to text or XML format with the option to log individual frames.
Free camera. Fly around the geometry sent to the graphics card and enable/disable wireframe/backface-culling/view frustum rendering.
Save and track display lists.
Saving of the OpenGL frame buffer (color/depth/stencil) pre and post render calls. The ability to save the "diff" of pre and post images is also available.
Apitrace is a relatively new tool from some folks at Valve, but it works great! Give it a try: https://github.com/apitrace/apitrace
I found you can check using glGetError() after every line of code you suspect might be wrong; the code doesn't look very clean afterwards, but it works.
For those on Mac, the built-in OpenGL debugger is great as well. It lets you inspect buffers and states, and helps in finding performance issues.
gDEBugger is an excellent free tool, but it is no longer supported. However, AMD has picked up its development, and the debugger is now known as CodeXL. It is available both as a standalone application and as a Visual Studio plugin, and works for native C++ applications as well as Java/Python applications using OpenGL bindings, on both NVIDIA and AMD GPUs. It's one hell of a tool.
There is also the free glslDevil: http://www.vis.uni-stuttgart.de/glsldevil/
It allows you to debug GLSL shaders extensively. It also shows failed OpenGL calls.
However, it's missing features to inspect textures and off-screen buffers.
Nsight is a good debugging tool if you have an NVIDIA card.
Download and install RenderDoc.
It will give you a timeline where you can inspect the details of every object.
Use glObjectLabel to give OpenGL objects names. These will show up in RenderDoc.
Enable GL_DEBUG_OUTPUT and use glDebugMessageCallback to install a callback function. Very verbose but you won't miss anything.
Check glGetError at the end of every function scope. This way, you will have a traceback to the source of an OpenGL error in the function scope where the error occurred. Very maintainable. Preferably, use a scope guard (see the sketch after this answer).
I don't check errors after each OpenGL call unless there's a good reason for it. For example, if glBindProgramPipeline fails that is a hard stop for me.
You can find out even more about debugging OpenGL here.
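A scope guard for the per-function glGetError() check mentioned above could be as small as this (a C++ sketch of the idea; GlErrorGuard is a hypothetical name):

struct GlErrorGuard
{
    const char* func;
    explicit GlErrorGuard(const char* f) : func(f) {}
    ~GlErrorGuard()
    {
        // Report every pending GL error when the enclosing scope exits.
        GLenum err;
        while ((err = glGetError()) != GL_NO_ERROR)
            fprintf(stderr, "OpenGL error 0x%04x in %s\n", err, func);
    }
};
#define GL_GUARD() GlErrorGuard glErrorGuard_(__func__)

void drawScene()
{
    GL_GUARD();  // reports any GL errors when drawScene() returns
    // ... GL calls ...
}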
Updating the window title dynamically is convenient for me.
Example (using GLFW, C++11):
glfwSetWindowTitle(window, ("Now Time is " + std::to_string(glfwGetTime())).c_str());