I am having problems creating a GL 4.x context using plain GLX and GLEW (1.12). None of the solutions I found here work for me.
What I am trying to do (and what I found in other questions): Create a 'base' GL context, call glewInit(), then create the proper GL 4.x context using glXCreateContextAttribsARB(...). This did not work for me, since glXChooseFBConfig keeps segfaulting.
I read that I have to call glxewInit first, but that didn't work. Building without GLEW_MX defined resulted in glxewInit not being available. Building with GLEW_MX defined resulted in the following compile error:
error: 'glxewGetContext' was not declared in this scope
    #define glxewInit() glxewContextInit(glxewGetContext())
note: in expansion of macro 'glxewInit'
    auto result = glxewInit()
When I omit calling glxewInit(), the application crashes when calling glXChooseFBConfig(...).
I am rather stuck here. What is the proper way to get a GL 4.x context using plain GLX? (I cannot use GLFW or anything similar; I am working on an existing application and am given a Display pointer and a Window id to work with, and the window already exists.)
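(For reference: when GLEW_MX is defined, GLEW expects the application itself to provide the glewGetContext()/glxewGetContext() lookups that the init macros expand to, which is exactly what the compile error above is complaining about. A minimal single-context sketch, not taken from the original post, would look like this:)

// Minimal GLEW_MX sketch: the application supplies the context-lookup hooks.
#define GLEW_MX
#include <GL/glew.h>
#include <GL/glxew.h>

static GLEWContext  g_glewContext;   // one per rendering context in a real multi-context app
static GLXEWContext g_glxewContext;

GLEWContext*  glewGetContext()  { return &g_glewContext; }
GLXEWContext* glxewGetContext() { return &g_glxewContext; }

// With these defined, the glxewInit() macro from the error message,
// glxewContextInit(glxewGetContext()), resolves and compiles.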
Thanks to the hint from Nicol I was able to fix the problem myself. Looking at the GLFW code, I did the following (it might not apply to everyone):
1. Create a default / old-school GL context using the given Display pointer and Window id. I also call glXQueryExtension to check for proper GLX support.
2. After creating the default context I call glewInit(), but I created the function pointers to glXChooseFBConfigSGIX and friends 'by hand'. I don't really know why I had to use the SGIX extension; I didn't bother to take a deeper look.
3. This way I was able to get a proper FBConfig structure to call glXCreateContextAttribsARB and create a new context after destroying the old one.
Et voilà, I get my working GL 4.x context.
Thanks again to Nicol; I don't know why I didn't think of looking at other code.
Here's my solution:
XMapWindow( m_X11View->m_pDisplay, m_X11View->m_hWindow );

XWindowAttributes watt;
XGetWindowAttributes( m_X11View->m_pDisplay, m_X11View->m_hWindow, &watt );

XVisualInfo temp;
temp.visualid = XVisualIDFromVisual( watt.visual );

int n;
XVisualInfo* visual = XGetVisualInfo( m_X11View->m_pDisplay, VisualIDMask, &temp, &n );

// 'screen' is not shown in the original snippet; the display's default screen is used here
const int screen = DefaultScreen( m_X11View->m_pDisplay );
int n_elems = 0;

if (glXQueryExtension( m_X11View->m_pDisplay, NULL, NULL ))
{
    // create dummy base context to init GLEW, then create the proper 4.x context
    m_X11View->m_GLXContext = glXCreateContext( m_X11View->m_pDisplay, visual, 0, true );
    glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

    // some debug output stuff
    std::cerr << "GL vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cerr << "GL renderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cerr << "GL Version: " << glGetString(GL_VERSION) << std::endl;
    std::cerr << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;

    int glx_version_major;
    int glx_version_minor;
    if (glXQueryVersion( m_X11View->m_pDisplay, &glx_version_major, &glx_version_minor ))
    {
        if (ExtensionSupported( m_X11View->m_pDisplay, screen, "GLX_SGIX_fbconfig" ))
        {
            GLenum result = glewInit();
            if (GLEW_OK == result)
            {
                std::cerr << "GLEW init successful" << std::endl;

                // fetch the GLX_SGIX_fbconfig entry points "by hand"
                PFNGLXGETFBCONFIGATTRIBSGIXPROC GetFBConfigAttribSGIX = (PFNGLXGETFBCONFIGATTRIBSGIXPROC)
                    glXGetProcAddress( (const GLubyte*)"glXGetFBConfigAttribSGIX" );
                PFNGLXCHOOSEFBCONFIGSGIXPROC ChooseFBConfigSGIX = (PFNGLXCHOOSEFBCONFIGSGIXPROC)
                    glXGetProcAddress( (const GLubyte*)"glXChooseFBConfigSGIX" );
                PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC CreateContextWithConfigSGIX = (PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC)
                    glXGetProcAddress( (const GLubyte*)"glXCreateContextWithConfigSGIX" );
                PFNGLXGETVISUALFROMFBCONFIGSGIXPROC GetVisualFromFBConfigSGIX = (PFNGLXGETVISUALFROMFBCONFIGSGIXPROC)
                    glXGetProcAddress( (const GLubyte*)"glXGetVisualFromFBConfigSGIX" );

                int gl_attribs[] = {
                    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
                    GLX_CONTEXT_MINOR_VERSION_ARB, 4,
                    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
                    //GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
                    0
                };

                GLXFBConfigSGIX* configs = ChooseFBConfigSGIX( m_X11View->m_pDisplay, screen, NULL, &n_elems );

                // replace the dummy context with the real 4.x context
                glXDestroyContext( m_X11View->m_pDisplay, m_X11View->m_GLXContext );
                m_X11View->m_GLXContext = glXCreateContextAttribsARB( m_X11View->m_pDisplay, configs[0], 0, true, gl_attribs );
                glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

                /*
                glDebugMessageCallback( GLDebugLog, NULL );
                // setup message control
                // disable everything, then enable errors only
                glDebugMessageControl( GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, 0, GL_FALSE );
                glDebugMessageControl( GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE,
                                       0, 0, GL_TRUE );
                */
            }
            else
            {
                std::cerr << "GLEW init failed: " << glewGetErrorString(result) << std::endl;
            }
        }
    }
}
I am writing a non-graphical command-line tool which calls some OpenGL functions.
// Includes needed for this snippet: GLEW must come before the Qt GL headers.
#include <GL/glew.h>
#include <QGLContext>
#include <QGLFormat>
#include <cstdlib>
#include <iostream>

const int DEFAULT_VERSION_MAJOR = 4;
const int DEFAULT_VERSION_MINOR = 3;
const auto DEFAULT_PROFILE = QGLFormat::CoreProfile;

int main ()
{
QGLFormat format;
{
format.setVersion (
DEFAULT_VERSION_MAJOR,
DEFAULT_VERSION_MINOR);
format.setProfile (DEFAULT_PROFILE);
}
QGLContext context (format);
// EDIT: this line is failing.
if (false == context.isValid ())
{
std::cerr << "No valid OpenGL context created." << std::endl;
return EXIT_FAILURE;
}
context.makeCurrent ();
if (const GLenum err = glewInit (); GLEW_OK != err)
{
std::cerr
<< __PRETTY_FUNCTION__
<< ": glewInit() returned "
<< glewGetErrorString (err)
<< std::endl;
}
glEnable (GL_DEBUG_OUTPUT);
// SEGMENTATION FAULT
glDebugMessageCallback ((GLDEBUGPROC) message_callback, nullptr); // message_callback: debug callback defined elsewhere in the tool
I assume this is segfaulting because the libraries are not properly initialized (function pointers not set up or whatever).
The GLEW error is Missing GL version.
This tool will need to create OpenGL objects e.g. compile shaders, but not draw anything.
What are the minimum steps to get OpenGL libraries working for a non-graphical application?
(A cross-platform solution would be nice, a Linux-only solution will be fine.)
I forgot to call QGLContext::create. That (probably) answers this question, and I'll post another question directed at the QGLContext problem.
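For reference, here is a minimal sketch of the corrected order of operations, reusing the names from the snippet above (whether create() succeeds without a paint device is the separate QGLContext problem mentioned):

QGLFormat format;
format.setVersion (DEFAULT_VERSION_MAJOR, DEFAULT_VERSION_MINOR);
format.setProfile (DEFAULT_PROFILE);

QGLContext context (format);

// The missing step: actually create the native GL context before using it.
if (!context.create ())
{
    std::cerr << "QGLContext::create() failed." << std::endl;
    return EXIT_FAILURE;
}

context.makeCurrent ();

// Only now does glewInit() have a current context to query.
if (const GLenum err = glewInit (); GLEW_OK != err)
{
    std::cerr << "glewInit() returned " << glewGetErrorString (err) << std::endl;
    return EXIT_FAILURE;
}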
I'm following this tutorial, and now I'm trying to create a window surface. Here's my createSurface():
void createSurface() {
    VkResult result = glfwCreateWindowSurface(instance, window, nullptr, &surface);
    if (result != VK_SUCCESS) {
        const char* description;
        int code = glfwGetError(&description);
        cout << code << ": " << description << endl;
        throw runtime_error("Window surface creation failed");
    }
}
When I run the program, I get this in the end:
X11: Vulkan instance missing VK_KHR_xcb_surface extension
terminate called after throwing an instance of 'std::runtime_error'
what(): Window surface creation failed
Aborted (core dumped)
Seems like my instance is just missing the VK_KHR_xcb_surface extension, or is it?
Here's the final part of my createInstance(). This part deals with instance extensions and creates the instance in the end:
// Extensions required by GLFW
uint32_t glfwExtensionsCount = 0;
const char** glfwExtensions;
glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionsCount);
instanceCreateInfo.enabledExtensionCount = glfwExtensionsCount;
instanceCreateInfo.ppEnabledExtensionNames = glfwExtensions;
// List enabled extensions in stdout
cout << "Enabled extensions" << endl;
for(int i = 0; i < instanceCreateInfo.enabledExtensionCount; i++) {
cout << instanceCreateInfo.ppEnabledExtensionNames[i] << endl;
}
// Create instance
if(vkCreateInstance(&instanceCreateInfo, nullptr, &instance) != VK_SUCCESS) {
runtime_error("Vulkan instance creation failed");
}
This is what it prints out:
Enabled extensions
VK_KHR_surface
VK_KHR_xcb_surface
So VK_KHR_xcb_surface should be enabled, yet GLFW says otherwise at surface creation. What is wrong here?
I also tried with Wayland, but it only changes VK_KHR_xcb_surface into VK_KHR_wayland_surface.
EDIT: This is the output of vulkaninfo.
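(One generic way to narrow this down, not from the original post: ask the Vulkan loader directly which instance extensions it exposes, and ask GLFW whether it considers Vulkan usable at all. This assumes GLFW has already been initialized.)

// Generic check: list the instance extensions the loader actually offers.
#include <vulkan/vulkan.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <vector>

void dumpInstanceExtensions() {
    std::cout << "glfwVulkanSupported: " << glfwVulkanSupported() << std::endl;

    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, props.data());

    std::cout << "Loader instance extensions:" << std::endl;
    for (const auto& p : props) {
        std::cout << "  " << p.extensionName << std::endl;
    }
}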
To my knowledge, when I set these context constraints on GLFW:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
I should get the maximum available OpenGL context on the running machine, provided that it's above OpenGL 3.3. However, when I use glGetString to get the OpenGL context version, I find that this is not the case. Every time I query glGetString for the context version, I only get the major and minor versions I set with glfwWindowHint, nothing above. Keep in mind, my GPU supports OpenGL 4.5.
Another thing to note, when I set no constraints whatsoever, I do in fact get an OpenGL 4.5 context.
Here is the full source code, which seems to replicate the problem:
#define GLEW_STATIC
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>

int main(int argc, char* argv[])
{
    if (!glfwInit())
    {
        std::cerr << "Failed to initialize GLFW 3.0.4" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    std::cout << "Initialized GLFW 3.0.4" << std::endl;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* m_window = glfwCreateWindow(640, 480, "Koala", NULL, NULL);
    if (!m_window)
    {
        std::cerr << "Failed to create OpenGL 3.3+ context" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(m_window);
    std::cout << "Created OpenGL 3.3+ context" << std::endl;

    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        std::cerr << "Failed to initialize GLEW 1.11.0" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    std::cout << "Initialized GLEW 1.11.0" << std::endl;

    const GLubyte* renderer = glGetString(GL_RENDERER);
    const GLubyte* version = glGetString(GL_VERSION);
    std::cout << "GPU: " << renderer << std::endl;
    std::cout << "OpenGL Version: " << version << std::endl;

    while (!glfwWindowShouldClose(m_window))
    {
        glfwSwapBuffers(m_window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
I should get the maximum available OpenGL context on the running machine, provided that it's above OpenGL 3.3.
That's not the way it is defined. From the GLFW documentation:
The GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR hints specify the client API version that the created context must be compatible with.
For OpenGL, these hints are not hard constraints, as they don't have to match exactly, but glfwCreateWindow will still fail if the resulting OpenGL version is less than the one requested.
While there is no way to ask the driver for a context of the highest supported version, most drivers provide this when you ask GLFW for a version 1.0 context.
So "most drivers" would give you what you expected, but this is not guaranteed.
The typical usage is that you specify the minimum version that supports all the features that your code uses. Then you don't care if you get exactly that version, or possibly a higher one. But you know that you will get a failure if the minimum version is not supported.
If you want to dynamically test which version is supported, your best bet is probably to first specify the highest version you can take advantage of and test the return value of glfwCreateWindow(). If it fails, reduce the version and call glfwCreateWindow() again, repeating until you reach the minimum version you can run with. Then keep track of which version succeeded, or of the version reported by glGetString(GL_VERSION), and use that to decide at runtime which features you can use.
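Here is a minimal sketch of that fallback loop (the version list and variable names are only illustrative):

// Try progressively lower context versions until glfwCreateWindow() succeeds.
const int versions[][2] = { {4, 5}, {4, 3}, {3, 3} }; // highest first, minimum last

GLFWwindow* window = nullptr;
for (const auto& v : versions)
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, v[0]);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, v[1]);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    window = glfwCreateWindow(640, 480, "Koala", NULL, NULL);
    if (window)
        break; // remember v[0]/v[1]: this is the version you actually got
}

if (!window)
{
    // Even the minimum version is not supported.
}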
The version you receive from the glGetString does not mean that functionality is capped at that version. In our experience, we receive a 3.3 context that can do everything that any context from that driver can (I mean 4.2+ version features). You only need to worry about the minimum version you require from the driver.
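If you want to see exactly what you got, you can also query the numeric version once the context is current; this is a generic GL 3.0+ check, dropped in after glewInit() in the program above:

// Query the version of the context that was actually created.
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
std::cout << "Context version: " << major << "." << minor << std::endl;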
I am trying to use Nvidia's performance toolkit to identify the performance bottleneck in an OpenGL application. Based on the user guide and the samples provided, I have arrived at this code:
// ********************************************************
// Set up NVPMAPI
#define NVPM_INITGUID
#include "NvPmApi.Manager.h"

// Simple singleton implementation for grabbing the NvPmApi
static NvPmApiManager S_NVPMManager;
NvPmApiManager *GetNvPmApiManager() { return &S_NVPMManager; }
const NvPmApi* getNvPmApi() { return S_NVPMManager.Api(); }

void MyApp::profiledRender()
{
    NVPMRESULT nvResult;
    nvResult = GetNvPmApiManager()->Construct(L"C:\\Program Files\\PerfKit_4.1.0.14260\\bin\\win7_x64\\NvPmApi.Core.dll");
    if (nvResult != S_OK)
    {
        return; // This is an error condition
    }

    auto api = getNvPmApi();

    nvResult = api->Init();
    if ((nvResult) != NVPM_OK)
    {
        return; // This is an error condition
    }

    NVPMContext context;
    nvResult = api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context);
    if (nvResult != NVPM_OK)
    {
        return; // This is an error condition
    }

    api->AddCounterByName(context, "GPU Bottleneck");

    NVPMUINT nCount(1);
    api->BeginExperiment(context, &nCount);
    for (NVPMUINT i = 0; i < nCount; i++) {
        api->BeginPass(context, i);
        render();
        glFinish();
        api->EndPass(context, i);
    }
    api->EndExperiment(context);

    NVPMUINT64 bottleneckUnitId(42424242);
    NVPMUINT64 bottleneckCycles(42424242);
    api->GetCounterValueByName(context, "GPU Bottleneck", 0, &bottleneckUnitId, &bottleneckCycles);

    char name[256] = { 0 };
    NVPMUINT length = 0;
    api->GetCounterName(bottleneckUnitId, name, &length);

    NVPMUINT64 counterValue(42424242), counterCycles(42424242);
    api->GetCounterValue(context, bottleneckUnitId, 0, &counterValue, &counterCycles);

    std::cout << "--- NVIDIA Performance Kit GPU profile ---\n"
                 "bottleneckUnitId: " << bottleneckUnitId
              << ", bottleneckCycles: " << bottleneckCycles
              << ", unit name: " << name
              << ", unit value: " << counterValue
              << ", unit cycles: " << counterCycles
              << std::endl;
}
However, the printed output shows that all of my integer values have been left unmodified:
--- NVIDIA Performance Kit GPU profile ---
bottleneckUnitId: 42424242, bottleneckCycles: 42424242, unit name: , unit value:
42424242, unit cycles: 42424242
I am in a valid GL context when calling profiledRender, and while the cast in api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context); looks a tiny bit dodgy, it does return an OK result (whereas passing 0 for the context returns a not-OK result, and passing in a random number causes an access violation).
This is built against Cinder 0.8.6 running in x64 on Windows 8.1, OpenGL 4.4, GeForce GT 750M.
OK, some more persistent analysis of the API return codes and further examination of the manual revealed the problems.
The render call needs to be wrapped in api->BeginObject(context, 0); and api->EndObject(context, 0);. That gives us a bottleneckUnitId.
It appears that the length pointer passed to GetCounterName both indicates the char array size as an input and is written to with the string length as output. This is kind of obvious on reflection but is a mistake copied from the user guide example. This gives us the name of the bottleneck.
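For clarity, here is a sketch of the corrected portion, using the same api/context/render names as the code above; only the BeginObject/EndObject wrapping and the length initialization differ from the original:

// Corrected experiment loop: wrap the measured draw calls in BeginObject/EndObject.
NVPMUINT nCount(1);
api->BeginExperiment(context, &nCount);
for (NVPMUINT i = 0; i < nCount; i++) {
    api->BeginPass(context, i);
    api->BeginObject(context, 0);   // object 0: the render() workload being measured
    render();
    glFinish();
    api->EndObject(context, 0);
    api->EndPass(context, i);
}
api->EndExperiment(context);

// GetCounterName: 'length' is in/out, so pass the buffer size going in.
char name[256] = { 0 };
NVPMUINT length = sizeof(name);   // in: buffer size, out: string length
api->GetCounterName(bottleneckUnitId, name, &length);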
I'm having an issue whenever I call glGenBuffers on Windows. Whenever I call glGenBuffers, or any 3.2-or-above function, OpenGL returns an INVALID_OPERATION error. After reading around the web, this is probably caused by not having the updated function pointers for 3.2 on Windows. From everything I have read, you must acquire the function pointers at runtime by asking the Windows API wglGetProcAddress to return the function pointer for use with your program and your current driver. This in itself isn't hard, but why reinvent the wheel? Instead I chose to include GLEW to handle the function pointers for me.
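(Aside: the manual wglGetProcAddress route described above looks roughly like this; the pointer and function names are only illustrative.)

// Illustrative only: fetching glGenBuffers "by hand" on Windows.
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // Khronos header providing PFNGLGENBUFFERSPROC

PFNGLGENBUFFERSPROC pglGenBuffers = nullptr;

void loadGenBuffers()
{
    // wglGetProcAddress needs a current GL context and returns NULL on failure.
    pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
}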
Here is a small program to demonstrate my issue.
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>

void PrintError(void)
{
    // Print glGetError()
    std::cout << "glGetError(): 0x" << std::hex << glGetError() << std::dec << std::endl;
}

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    PrintError();

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    GLenum error = glewInit();
    if (error != GLEW_OK)
        std::cerr << glewGetErrorString(error) << std::endl;

    // GLEW information
    if (GLEW_VERSION_3_2)
        std::cout << "Supports 3.2.." << std::endl;

    // Try buffer
    GLuint buffer;
    glGenBuffers(1, &buffer); // INVALID_OPERATION
    PrintError();             // Error shows up

    // This is our main loop
    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
When I run this code I get an OpenGL window. However, in my console I see that the first call to PrintError returned NO_ERROR while the second call returned INVALID_OPERATION. I have a feeling I'm overlooking some small fact, but I just can't seem to locate it on GLFW's or GLEW's webpages.
I'm currently running:
glfw-3.0.4 (32bit)
glew-1.10.0 (32bit)
**** Update ****
In response to glampert's post, I added the following code after the glewInit() call.
GLenum loop_error = glGetError();
while (loop_error != GL_NO_ERROR)
{
    switch (loop_error)
    {
        case GL_NO_ERROR:                      std::cout << "GL_NO_ERROR" << std::endl; break;
        case GL_INVALID_ENUM:                  std::cout << "GL_INVALID_ENUM" << std::endl; break;
        case GL_INVALID_VALUE:                 std::cout << "GL_INVALID_VALUE" << std::endl; break;
        case GL_INVALID_OPERATION:             std::cout << "GL_INVALID_OPERATION" << std::endl; break;
        case GL_INVALID_FRAMEBUFFER_OPERATION: std::cout << "GL_INVALID_FRAMEBUFFER_OPERATION" << std::endl; break;
        case GL_OUT_OF_MEMORY:                 std::cout << "GL_OUT_OF_MEMORY" << std::endl; break;
    }
    loop_error = glGetError();
}
This confirms his assumption that the invalid operation is caused by the glewInit() call.
**** Update ****
It looks like this is a known issue with GLEW and 3.2 contexts:
http://www.opengl.org/wiki/OpenGL_Loading_Library
After locating GLEW as the troublemaker, I located the following two posts.
OpenGL: glGetError() returns invalid enum after call to glewInit()
It seems the suggested solution is to set the experimental flag, which I'm already doing. However, the website mentions that even after doing so, there is still the possibility that it will cause an invalid operation and fail to grab the function pointers.
I think at this point my best solution is to just grab my own function pointers.
It could very well be that the error you are getting is a residual error from GLFW or GLEW.
Try adding this after the library init code:
GLenum error;
while ((error = glGetError()) != GL_NO_ERROR)
{
    // print(error), etc...
}
and before you attempt to call any OpenGL function, to clear the error cache.
If the error is indeed caused by GLEW, it may not necessarily be a dangerous one that stops the library from working. So I wouldn't drop the library just because of this issue, which will eventually be fixed. However, if you do decide to fetch the function pointers yourself, GLFW provides the glfwGetProcAddress function, which will allow you to do so in a portable manner.
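A minimal sketch of that approach, assuming GLEW's header is still used only for the PFNGLGENBUFFERSPROC typedef (the pointer and function names are your own):

// Illustrative: load glGenBuffers through GLFW instead of relying on GLEW.
#define GLEW_STATIC
#include <GL/glew.h>      // used here only for the PFNGLGENBUFFERSPROC typedef
#include <GLFW/glfw3.h>
#include <iostream>

PFNGLGENBUFFERSPROC myGenBuffers = nullptr;

bool loadGenBuffers()
{
    // A context must be current (i.e. after glfwMakeContextCurrent).
    myGenBuffers = (PFNGLGENBUFFERSPROC)glfwGetProcAddress("glGenBuffers");
    if (!myGenBuffers)
    {
        std::cerr << "glGenBuffers is not available in this context" << std::endl;
        return false;
    }
    return true;
}

Calling myGenBuffers(1, &buffer) then does the same job as the glGenBuffers that GLEW would have resolved, without going through GLEW's pointer table.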