I have installed an OpenGL debug callback and enabled GL_DEBUG_OUTPUT. The callback looks like this:
void glMsgCallback( GLenum source,
                    GLenum type,
                    GLuint id,
                    GLenum severity,
                    GLsizei length,
                    const GLchar* message,
                    const void* userParam )
{
    std::ostringstream os;
    os << "GL LOG: type = " << type << ", severity = " << severity << ", message = " << message;
    log(os.str());
}
And this is what I get in the log file:
GL LOG: type = 33361, severity = 33387, message = type: 1, local: 0, shared: 0, gpr: 6, inst: 18, bytes: 192
How do I interpret this data? Is there any way to discover which OpenGL function or feature is causing the message? I found out that type 33361 corresponds to GL_DEBUG_TYPE_OTHER and severity 33387 stands for GL_DEBUG_SEVERITY_NOTIFICATION, but I have no idea what the rest means.
Is there any way to discover which OpenGL function or feature is causing the message?
The debug callback can be called synchronously or asynchronously; see Debug Output - Getting messages. Enable synchronous debug output:
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
This lets you set a breakpoint in the debug message callback and see on the call stack which function caused the message.
Furthermore, the output can be filtered; see How to use glDebugMessageControl. For example, to limit the output to error messages:
glDebugMessageControl(
GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE, 0, NULL, GL_TRUE);
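Putting it together, a registration plus filter setup might look roughly like this (a sketch; glMsgCallback is the callback from the question, and a context supporting GL 4.3+ / KHR_debug is assumed):
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // messages are delivered on the call that caused them
glDebugMessageCallback(glMsgCallback, nullptr);
// drop everything first, then re-enable only API errors
glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_FALSE);
glDebugMessageControl(GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE, 0, nullptr, GL_TRUE);
With GL_DEBUG_OUTPUT_SYNCHRONOUS enabled, a breakpoint placed inside glMsgCallback shows the offending GL call on the stack.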
I'm just getting started with OpenCL 1.2 and the C++ bindings. I want to enqueue a write buffer asynchronously and get a callback once the operation has completed. Here is a stripped-down version of the relevant lines of code:
cl::Event enqueuingBufferReady;
auto error = enqueuingBufferReady.setCallback (CL_COMPLETE, [] (cl_event, cl_int, void*) { std::cout << "Enqueueing complete\n"; });
std::cout << "SetCallback return value: " << MyOpenCLHelpers::getErrorString (error) << std::endl;
// source is a std::vector<int>, buffer is a cl::Buffer of a matching size
commandQueue.enqueueWriteBuffer (buffer, CL_FALSE, 0, sizeof (int) * source.size(), source.data(), NULL, &enqueuingBufferReady);
//... execute the kernel - works successfully!
cl_int info;
enqueuingBufferReady.getInfo (CL_EVENT_COMMAND_EXECUTION_STATUS, &info);
std::cout << "State of enqueuing " << MyOpenCLHelpers::getEventCommandExecutionStatusString (info) << std::endl;
What works:
The kernel executes successfully and produces the right results, so enqueueing the buffer must have worked. The program terminates with the printout
State of enqueuing CL_COMPLETE
What does not work:
The setCallback call returns
SetCallback return value: CL_INVALID_EVENT
The callback is never called.
So what's wrong with this piece of code and how could it be changed to work as intended?
In the meantime I found it out myself. My mistake was setting the callback before enqueueing the write buffer. The right order is:
cl::Event enqueuingBufferReady;
// source is a std::vector<int>, buffer is a cl::Buffer of a matching size
commandQueue.enqueueWriteBuffer (buffer, CL_FALSE, 0, sizeof (int) * source.size(), source.data(), NULL, &enqueuingBufferReady);
auto error = enqueuingBufferReady.setCallback (CL_COMPLETE, [] (cl_event, cl_int, void*) { std::cout << "Enqueueing complete\n"; });
std::cout << "SetCallback return value: " << MyOpenCLHelpers::getErrorString (error) << std::endl;
Only after the call to enqueueWriteBuffer does the passed-in cl::Event become valid, so the subsequent setCallback call works. I was a bit confused about this because I wasn't sure how it could be guaranteed that enqueueing the buffer wouldn't finish before the callback was set; however, my test showed that this doesn't matter, as the callback is called even if it is set long after the operation has already completed.
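One caveat worth noting (my assumption from the spec, not something this test proves): the callback is typically delivered from a runtime-internal thread, so if the program can exit right after enqueueing, it may help to block until the transfer is done before relying on the printout, e.g.:
enqueuingBufferReady.wait(); // or: commandQueue.finish();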
I have problems creating a GL 4.x context using plain GLX and GLEW (1.12). All solutions I found here are not working for me.
What I am trying to do (and what I found in other questions): Create a 'base' GL context, call glewInit(), then create the proper GL 4.x context using glXCreateContextAttribsARB(...). This did not work for me, since glXChooseFBConfig keeps segfaulting.
I read that I have to call glxewInit first, but that didn't work. Building without GLEW_MX defined resulted in glxewInit not being available. Building with GLEW_MX defined resulted in the following compile error:
error: 'glxewGetContext' was not declared in this scope #define glxewInit() glxewContextInit(glxewGetContext())
note: in expansion of macro 'glxewInit' auto result = glxewInit()
When I omit calling glxewInit(), the application crashes when calling glXChooseFBConfig(...).
I am rather stuck here. What is the proper way to get a GL 4.x context using plain GLX? (I cannot use glfw or something similar; I am working on a given application, and I get a Display pointer and a Window id to work with. The window is already there.)
Thanks to the hint from Nicol I was able to fix the problem myself. Looking at the glfw code, I did the following (it might not apply to everyone):
I create a default / old-school GL context using the given Display pointer and Window id. I also call glXQueryExtension to check for proper GLX support.
After creating the default context I call glewInit, but I create the function pointers to glXChooseFBConfigSGIX 'by hand'. I don't really know why I had to use the SGIX extension; I didn't bother to take a deeper look.
This way I was able to get a proper FBConfig structure to call glXCreateContextAttribsARB and create a new context after destroying the old one.
Et voilà, I get my working GL 4.x context.
Thanks again to Nicol; I don't know why I didn't think of looking at other code.
Here's my solution:
XMapWindow( m_X11View->m_pDisplay, m_X11View->m_hWindow );
XWindowAttributes watt;
XGetWindowAttributes( m_X11View->m_pDisplay, m_X11View->m_hWindow, &watt );
XVisualInfo temp;
temp.visualid = XVisualIDFromVisual(watt.visual);
int n;
XVisualInfo* visual = XGetVisualInfo( m_X11View->m_pDisplay, VisualIDMask, &temp, &n );
int n_elems = 0;
if (glXQueryExtension(m_X11View->m_pDisplay, NULL, NULL))
{
    // create dummy base context to init glew, create proper 4.x context
    m_X11View->m_GLXContext = glXCreateContext( m_X11View->m_pDisplay, visual, 0, true );
    glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );
    // some debug output stuff
    std::cerr << "GL vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cerr << "GL renderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cerr << "GL Version: " << glGetString(GL_VERSION) << std::endl;
    std::cerr << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
    int glx_version_major;
    int glx_version_minor;
    if (glXQueryVersion(m_X11View->m_pDisplay, &glx_version_major, &glx_version_minor))
    {
        if (ExtensionSupported(m_X11View->m_pDisplay, screen, "GLX_SGIX_fbconfig"))
        {
            int result = glewInit();
            if (GLEW_OK == result)
            {
                std::cerr << "GLEW init successful" << std::endl;
                PFNGLXGETFBCONFIGATTRIBSGIXPROC GetFBConfigAttribSGIX = (PFNGLXGETFBCONFIGATTRIBSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetFBConfigAttribSGIX");
                PFNGLXCHOOSEFBCONFIGSGIXPROC ChooseFBConfigSGIX = (PFNGLXCHOOSEFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXChooseFBConfigSGIX");
                PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC CreateContextWithConfigSGIX = (PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXCreateContextWithConfigSGIX");
                PFNGLXGETVISUALFROMFBCONFIGSGIXPROC GetVisualFromFBConfigSGIX = (PFNGLXGETVISUALFROMFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetVisualFromFBConfigSGIX");
                int gl_attribs[] = {
                    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
                    GLX_CONTEXT_MINOR_VERSION_ARB, 4,
                    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
                    //GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
                    0
                };
                GLXFBConfigSGIX* configs = ChooseFBConfigSGIX( m_X11View->m_pDisplay, screen, NULL, &n_elems );
                glXDestroyContext( m_X11View->m_pDisplay, m_X11View->m_GLXContext );
                m_X11View->m_GLXContext = glXCreateContextAttribsARB( m_X11View->m_pDisplay, configs[0], 0, true, gl_attribs );
                glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );
                /*
                glDebugMessageCallback( GLDebugLog, NULL );
                // setup message control
                // disable everything
                // enable errors only
                glDebugMessageControl( GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, 0, GL_FALSE );
                glDebugMessageControl( GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE,
                                       0, 0, GL_TRUE );
                */
            }
            else
            {
                std::cerr << "GLEW init failed: " << glewGetErrorString(result) << std::endl;
            }
        }
    }
}
Driver:
PIO_STACK_LOCATION pIoStackLocation = IoGetCurrentIrpStackLocation(pIrp);
PVOID pBuf = pIrp->AssociatedIrp.SystemBuffer;
switch (pIoStackLocation->Parameters.DeviceIoControl.IoControlCode)
{
    case IOCTL_TEST:
        DbgPrint("IOCTL IOCTL_TEST.");
        DbgPrint("int received : %i", pBuf);
        break;
}
User-space App:
int test = 123;
int outputBuffer;
DeviceIoControl(hDevice, IOCTL_SET_PROCESS, &test, sizeof(test), &outputBuffer, sizeof(outputBuffer), &dwBytesRead, NULL);
std::cout << "Output reads as : " << outputBuffer << std::endl;
The user-space application prints out the correct value received back through the output buffer, but in debug view the value printed out seems to be garbage (i.e. "int received : 169642096").
What am I doing wrong?
As the previous answer said, you are printing the address of the variable, not its content.
I strongly suggest you take a look at the following Driver Development tutorials:
http://www.opferman.com/Tutorials/
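For illustration, a minimal sketch of the corrected driver handler (assuming the IOCTL uses METHOD_BUFFERED, so SystemBuffer holds the caller's input):
case IOCTL_TEST:
    DbgPrint("IOCTL IOCTL_TEST.");
    if (pIoStackLocation->Parameters.DeviceIoControl.InputBufferLength >= sizeof(int))
    {
        int value = *(int*)pBuf;               // dereference the buffer to get the value
        DbgPrint("int received : %i", value);
    }
    break;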
I'm trying to step into using shaders with OpenGL, but it seems I can't compile the shader. More frustratingly, it also appears the error log is empty. I've searched through this site extensively and checked many different tutorials but there doesn't seem to be an explanation for why it fails. I even bought a book dealing with shaders but it doesn't account for vanishing log files.
My feeling is that there must be an error with how I am linking the shader source.
//Create shader
GLuint vertObject;
vertObject = glCreateShader(GL_VERTEX_SHADER);
//Stream
ifstream vertfile;
//Try for open - vertex
try
{
    //Open
    vertfile.open(vertex.c_str());
    if(!vertfile)
    {
        // file couldn't be opened
        cout << "Open " << vertex << " failed.";
    }
    else
    {
        getline(vertfile, verttext, '\0');
        //Test file read worked
        cout << verttext;
    }
    //Close
    vertfile.close();
}
catch(std::ifstream::failure e)
{
    cout << "Failed to open " << vertex;
}
//Link source
GLint const shader_length = verttext.size();
GLchar const *shader_source = verttext.c_str();
glShaderSource(vertObject, 1, &shader_source, &shader_length);
//Compile
glCompileShader(vertObject);
//Check for compilation result
GLint isCompiled = 0;
glGetShaderiv(vertObject, GL_COMPILE_STATUS, &isCompiled);
if (!isCompiled)
{
    //Did not compile, why?
    std::cout << "The shader could not be compiled\n" << std::endl;
    char errorlog[500] = {'\0'};
    glGetShaderInfoLog(vertObject, 500, 0, errorlog);
    string failed(errorlog);
    printf("Log size is %d", failed.size());
}
printf("Compiled state = %d", isCompiled);
The shader code is as trivial as can be.
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I can't get either my fragment or my vertex shader to compile. If I can get the error log to show, then at least I will be able to start error checking my own work. For now, though, I can't even get the errors.
It seems that the reason glCompileShader was failing without a log in this case is that I attempted to perform this action before the OpenGL context had been initialised (glutCreateWindow etc.).
If anybody else gets this problem in the future, try querying the GL version just before you create any GLSL objects.
printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));
If you get "OpenGL version is (null)", then you don't have a valid context to create your shaders with. Find where you create your context, and make sure your shader creation comes afterwards.
My application gets data from the network and draws it on the scene (the scene uses a handmade OpenGL engine).
It works for several hours. When I'm not using my desktop, my monitor turns off because of Display Power Management Signaling (DPMS). Then, when I touch the mouse or keyboard, the monitor turns on and the application hangs (X hangs too).
If I do
xset -dpms
the operating system doesn't use DPMS and the application works stably.
These issues occur on CentOS 6 and Arch Linux, but when I run the application under Ubuntu 12.10 it works great!
I tried different NVidia drivers. No effect.
I tried using ssh to log in remotely and attach to the process with gdb.
After the monitor is turned on, I can't find the application in the process table.
How can I diagnose the problem? What happens (in the OpenGL environment) when the monitor turns off and on? Does Ubuntu do something special when using DPMS?
We have a guess at the reason for the problem!
When the monitor is turned off, we lose the OpenGL context. When the monitor wakes up, the application hangs (there is no context).
The difference in behavior between operating systems is because of different monitor connections: the monitor for Kubuntu is connected with a VGA cable, and so (probably) it has no influence on X's behaviour.
Have you tried adding robustness support to your OpenGL context using GL_ARB_robustness?
2.6 "Graphics Reset Recovery"
Certain events can result in a reset of the GL context. Such a reset
causes all context state to be lost. Recovery from such events
requires recreation of all objects in the affected context. The
current status of the graphics reset state is returned by
enum GetGraphicsResetStatusARB();
The symbolic constant returned indicates if the GL context has been in
a reset state at any point since the last call to
GetGraphicsResetStatusARB. NO_ERROR indicates that the GL context has
not been in a reset state since the last call.
GUILTY_CONTEXT_RESET_ARB indicates that a reset has been detected that
is attributable to the current GL context. INNOCENT_CONTEXT_RESET_ARB
indicates a reset has been detected that is not attributable to the
current GL context. UNKNOWN_CONTEXT_RESET_ARB indicates a detected
graphics reset whose cause is unknown.
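For example, a per-frame check could look like this (a sketch; it assumes the glGetGraphicsResetStatusARB entry point has been loaded, e.g. via GLEW, and that the context was created with reset notification enabled):
GLenum status = glGetGraphicsResetStatusARB();
if (status != GL_NO_ERROR)
{
    // GUILTY / INNOCENT / UNKNOWN_CONTEXT_RESET_ARB: the context was reset,
    // so destroy it and recreate the context and every GL object it owned.
    recreateContextAndResources();   // hypothetical recovery hook
}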
Also, make sure you have a debug context when you initialize your context, and use the ARB_debug_output extension to receive log output.
void DebugMessageControlARB(enum source,
                            enum type,
                            enum severity,
                            sizei count,
                            const uint* ids,
                            boolean enabled);

void DebugMessageInsertARB(enum source,
                           enum type,
                           uint id,
                           enum severity,
                           sizei length,
                           const char* buf);

void DebugMessageCallbackARB(DEBUGPROCARB callback,
                             const void* userParam);

uint GetDebugMessageLogARB(uint count,
                           sizei bufSize,
                           enum* sources,
                           enum* types,
                           uint* ids,
                           enum* severities,
                           sizei* lengths,
                           char* messageLog);

void GetPointerv(enum pname,
                 void** params);
For example:
// Initialize GL_ARB_debug_output ASAP
if (glfwExtensionSupported("GL_ARB_debug_output"))
{
    typedef void APIENTRY (*glDebugMessageCallbackARBFunc)
        (GLDEBUGPROCARB callback, const void* userParam);
    typedef void APIENTRY (*glDebugMessageControlARBFunc)
        (GLenum source, GLenum type, GLenum severity,
         GLsizei count, const GLuint* ids, GLboolean enabled);
    auto glDebugMessageCallbackARB = (glDebugMessageCallbackARBFunc)
        glfwGetProcAddress("glDebugMessageCallbackARB");
    auto glDebugMessageControlARB = (glDebugMessageControlARBFunc)
        glfwGetProcAddress("glDebugMessageControlARB");
    glDebugMessageCallbackARB(debugCallback, this);
    glDebugMessageControlARB(GL_DONT_CARE, GL_DONT_CARE,
                             GL_DEBUG_SEVERITY_LOW_ARB, 0, nullptr, GL_TRUE);
}
...
std::string GlfwThread::severityString(GLenum severity)
{
    switch (severity)
    {
        case GL_DEBUG_SEVERITY_LOW_ARB:    return "LOW";
        case GL_DEBUG_SEVERITY_MEDIUM_ARB: return "MEDIUM";
        case GL_DEBUG_SEVERITY_HIGH_ARB:   return "HIGH";
        default: return "??";
    }
}

std::string GlfwThread::sourceString(GLenum source)
{
    switch (source)
    {
        case GL_DEBUG_SOURCE_API_ARB:             return "API";
        case GL_DEBUG_SOURCE_WINDOW_SYSTEM_ARB:   return "SYSTEM";
        case GL_DEBUG_SOURCE_SHADER_COMPILER_ARB: return "SHADER_COMPILER";
        case GL_DEBUG_SOURCE_THIRD_PARTY_ARB:     return "THIRD_PARTY";
        case GL_DEBUG_SOURCE_APPLICATION_ARB:     return "APPLICATION";
        case GL_DEBUG_SOURCE_OTHER_ARB:           return "OTHER";
        default: return "???";
    }
}

std::string GlfwThread::typeString(GLenum type)
{
    switch (type)
    {
        case GL_DEBUG_TYPE_ERROR_ARB:               return "ERROR";
        case GL_DEBUG_TYPE_DEPRECATED_BEHAVIOR_ARB: return "DEPRECATED_BEHAVIOR";
        case GL_DEBUG_TYPE_UNDEFINED_BEHAVIOR_ARB:  return "UNDEFINED_BEHAVIOR";
        case GL_DEBUG_TYPE_PORTABILITY_ARB:         return "PORTABILITY";
        case GL_DEBUG_TYPE_PERFORMANCE_ARB:         return "PERFORMANCE";
        case GL_DEBUG_TYPE_OTHER_ARB:               return "OTHER";
        default: return "???";
    }
}

// Note: this is static, it is called from OpenGL
void GlfwThread::debugCallback(GLenum source, GLenum type,
                               GLuint id, GLenum severity,
                               GLsizei, const GLchar *message,
                               const GLvoid *)
{
    std::cout << "source=" << sourceString(source) <<
        " type=" << typeString(type) <<
        " id=" << id <<
        " severity=" << severityString(severity) <<
        " message=" << message <<
        std::endl;
    AssertBreak(type != GL_DEBUG_TYPE_ERROR_ARB);
}
You almost certainly have both of those extensions available on a decent OpenGL implementation. They help, a lot. Debug contexts do validation on everything and complain to the log. Some OpenGL implementations even give performance suggestions in the log output. Using ARB_debug_output makes checking glGetError obsolete: it checks every call for you.
You can start by looking at X's logs, usually located in /var/log/, and at ~/.xsession-errors.
It's not out of the question that OpenGL is doing something screwy, so if your app has any logging, turn it on.
Enable core dumps by running ulimit -c unlimited. You can analyze the dump by opening it in gdb like this:
gdb <executable file> <core dump file>
See if that produces anything useful, then research whatever that is.