Initializing OpenGL in a non-graphical application

I am writing a non-graphical command-line tool which calls some OpenGL functions.
#include <GL/glew.h>
#include <QGLContext>
#include <QGLFormat>
#include <cstdlib>
#include <iostream>

const int DEFAULT_VERSION_MAJOR = 4;
const int DEFAULT_VERSION_MINOR = 3;
const auto DEFAULT_PROFILE = QGLFormat::CoreProfile;

int main ()
{
    QGLFormat format;
    {
        format.setVersion (
            DEFAULT_VERSION_MAJOR,
            DEFAULT_VERSION_MINOR);
        format.setProfile (DEFAULT_PROFILE);
    }
    QGLContext context (format);
    // EDIT: this line is failing.
    if (false == context.isValid ())
    {
        std::cerr << "No valid OpenGL context created." << std::endl;
        return EXIT_FAILURE;
    }
    context.makeCurrent ();
    if (const GLenum err = glewInit (); GLEW_OK != err)
    {
        std::cerr
            << __PRETTY_FUNCTION__
            << ": glewInit() returned "
            << glewGetErrorString (err)
            << std::endl;
    }
    glEnable (GL_DEBUG_OUTPUT);
    // SEGMENTATION FAULT
    glDebugMessageCallback ((GLDEBUGPROC) message_callback, nullptr);
I assume this is segfaulting because the libraries are not properly initialized (function pointers not set up or whatever).
The GLEW error is Missing GL version.
This tool will need to create OpenGL objects e.g. compile shaders, but not draw anything.
What are the minimum steps to get OpenGL libraries working for a non-graphical application?
(A cross-platform solution would be nice, but a Linux-only solution will be fine.)

I forgot to call QGLContext::create. That (probably) answers this question, and I'll post another question directed at the QGLContext problem.
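
For reference, a minimal sketch of the missing step, assuming the Qt 4 QGLContext API (only a bare return-value check for error handling):

QGLContext context (format);
// The step that was missing: actually allocate the underlying GL context.
if (false == context.create ())
{
    std::cerr << "QGLContext::create() failed." << std::endl;
    return EXIT_FAILURE;
}
context.makeCurrent ();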

GLFW throws "X11: Vulkan instance missing VK_KHR_xcb_surface extension" even when it is in ppEnabledExtensionNames

I'm following this tutorial, and now I'm trying to create a window surface. Here's my createSurface():
void createSurface() {
    int result = glfwCreateWindowSurface(instance, window, nullptr, &surface);
    if (result != VK_SUCCESS) {
        const char* description;
        int code = glfwGetError(&description);
        cout << description << endl;
        throw runtime_error("Window surface creation failed");
    }
}
When I run the program, I get this in the end:
X11: Vulkan instance missing VK_KHR_xcb_surface extension
terminate called after throwing an instance of 'std::runtime_error'
what(): Window surface creation failed
Aborted (core dumped)
Seems like my instance is just missing the VK_KHR_xcb_surface extension, or is it?
Here's the final part of my createInstance(). This part deals with instance extensions and creates the instance in the end:
// Extensions required by GLFW
uint32_t glfwExtensionsCount = 0;
const char** glfwExtensions;
glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionsCount);
instanceCreateInfo.enabledExtensionCount = glfwExtensionsCount;
instanceCreateInfo.ppEnabledExtensionNames = glfwExtensions;

// List enabled extensions in stdout
cout << "Enabled extensions" << endl;
for (uint32_t i = 0; i < instanceCreateInfo.enabledExtensionCount; i++) {
    cout << instanceCreateInfo.ppEnabledExtensionNames[i] << endl;
}

// Create instance
if (vkCreateInstance(&instanceCreateInfo, nullptr, &instance) != VK_SUCCESS) {
    throw runtime_error("Vulkan instance creation failed");
}
This is what it prints out:
Enabled extensions
VK_KHR_surface
VK_KHR_xcb_surface
So VK_KHR_xcb_surface should be enabled, yet GLFW says otherwise at surface creation. What is wrong here?
I also tried with Wayland, but it only changes VK_KHR_xcb_surface into VK_KHR_wayland_surface.
EDIT: This is the output of vulkaninfo.
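One diagnostic worth running (a sketch, not taken from the thread): ask the Vulkan loader directly which instance extensions it exposes and compare that list against the one handed to vkCreateInstance.

#include <vulkan/vulkan.h>
#include <iostream>
#include <vector>

// Prints every instance extension the loader / ICDs report.
void dumpInstanceExtensions() {
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, props.data());
    for (const auto& p : props)
        std::cout << p.extensionName << std::endl;
}

If VK_KHR_xcb_surface does not show up in that list, the problem lies with the driver / loader installation rather than with the create-info.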

How to create an OpenGL context in a NodeJS native addon on macOS?

Follow-up for this question.
I'm trying to create a NodeJS native addon that uses OpenGL.
I'm not able to use OpenGL functions because CGLGetCurrentContext() always returns NULL.
When trying to create a new context to draw into, CGLChoosePixelFormat always returns the error kCGLBadConnection invalid CoreGraphics connection.
What is puzzling me is that when I isolate the code that creates the OpenGL context into a standalone C++ project, it works! It only gives an error when I run it inside the NodeJS addon!
I created this NodeJS native addon project to exemplify my error: https://github.com/Psidium/node-opengl-context-error-example
This is the code that works when executed on a standalone project and errors out when running inside NodeJS:
//
//  main.cpp
//  test_cli
//
//  Created by Borges, Gabriel on 4/3/20.
//  Copyright © 2020 Psidium. All rights reserved.
//
#include <iostream>
#include <OpenGL/OpenGL.h>

int main(int argc, const char * argv[]) {
    std::cout << "Context before creating it: " << CGLGetCurrentContext() << "\n";
    CGLContextObj context;
    CGLPixelFormatAttribute attributes[2] = {
        kCGLPFAAccelerated, // no software rendering
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix;
    CGLError errorCode;
    GLint num; // stores the number of possible pixel formats
    errorCode = CGLChoosePixelFormat( attributes, &pix, &num );
    if (errorCode > 0) {
        std::cout << ": Error returned by choosePixelFormat: " << errorCode << "\n";
        return 10;
    }
    errorCode = CGLCreateContext( pix, NULL, &context );
    if (errorCode > 0) {
        std::cout << ": Error returned by CGLCreateContext: " << errorCode << "\n";
        return 10;
    }
    CGLDestroyPixelFormat( pix );
    errorCode = CGLSetCurrentContext( context );
    if (errorCode > 0) {
        std::cout << "Error returned by CGLSetCurrentContext: " << errorCode << "\n";
        return 10;
    }
    std::cout << "Context after being created is: " << CGLGetCurrentContext() << "\n";
    return 0;
}
I already tried:
Using fork() to create a context in a subprocess (didn't work);
Changing the pixelformat attributes to something that will create my context (didn't work);
I have a hunch that it may have something to do with the fact that a Node native addon is a dynamically linked library, or maybe my OpenGL createContext function may not be executing on the main thread (but if this was the case, the fork() would have solved it, right?).
Accessing graphics hardware requires extra permissions - Windows and macOS (I don't know about others) restrict creation of a hardware-accelerated OpenGL context to the interactive user session (I may be wrong with the terms here). From one of the articles on the web:
In case the user is not logged in, the CGLChoosePixelFormat will return kCGLBadConnection
An interactive session is easier to feel than to understand; e.g. when you log in interactively and launch an application, that is an interactive session; when a process is started as a service, it is non-interactive. How this is actually managed by the system requires deeper reading. As far as I know, there is no easy way of "escaping" the non-interactive process flag.
NodeJS can be used as part of a web server, so I would expect that this could be exactly the problem - it is started as a service, by another non-interactive user, or under other special conditions that make it non-interactive. So more details on how you use / start NodeJS itself might clarify why the code doesn't work. But I would expect that using OpenGL on the server side might not be a good idea anyway (if that is the target), although a software OpenGL implementation (without the kCGLPFAAccelerated flag) might work, as sketched below.
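A minimal sketch of that last suggestion, reusing the question's own CGL calls but dropping kCGLPFAAccelerated so CGL is free to fall back to a software renderer:

CGLPixelFormatAttribute attributes[1] = {
    (CGLPixelFormatAttribute) 0   // no kCGLPFAAccelerated: software rendering allowed
};
CGLPixelFormatObj pix;
GLint num;
CGLError errorCode = CGLChoosePixelFormat( attributes, &pix, &num );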
By the way, there are at least two OpenGL / WebGL extensions to NodeJS - have you tried their samples to see if they behave in the same or different way in your environment?
https://github.com/google/node-gles
https://github.com/mikeseven/node-webgl

GLX create GL 4.x context with GLEW crashes glXChooseFBConfig

I have problems creating a GL 4.x context using plain GLX and GLEW (1.12). None of the solutions I found here work for me.
What I am trying to do (and what I found in other questions): Create a 'base' GL context, call glewInit(), then create the proper GL 4.x context using glXCreateContextAttribsARB(...). This did not work for me, since glXChooseFBConfig keeps segfaulting.
I read that I have to call glxewInit first, but that didn't work. Building without GLEW_MX defined resulted in glxewInit not being available. Building with GLEW_MX defined resulted in the following compile error:
error: 'glxewGetContext' was not declared in this scope #define glxewInit() glxewContextInit(glxewGetContext())
note: in expansion of macro 'glxewInit' auto result = glxewInit()
When I omit calling glxewInit(), the application crashes when calling glXChooseFBConfig(...).
I am rather stuck here. What is the proper way to get a GL 4.x context using plain GLX? (I cannot use GLFW or anything similar; I am working on a given application and get a Display pointer and a Window id to work with; the window is already there.)
Thanks to the hint from Nicol I was able to fix the problem myself. Looking at the GLFW code, I did the following (it might not apply to everyone):
Create a default / old-school GL context using the given Display pointer and Window id. I also call glXQueryExtension to check for proper glX support.
After creating the default context, I call glewInit, but create the function pointers to glXChooseFBConfigSGIX 'by hand'. I don't really know why I had to use the SGIX extension; I didn't bother to take a deeper look.
This way I was able to get a proper FBConfig structure to call glXCreateContextAttribsARB and create a new context after destroying the old one.
Et voilà, I get my working GL 4.x context.
Thanks again to Nicol; I don't know why I didn't think of looking at other code.
Here's my solution:
XMapWindow( m_X11View->m_pDisplay, m_X11View->m_hWindow );
XWindowAttributes watt;
XGetWindowAttributes( m_X11View->m_pDisplay, m_X11View->m_hWindow, &watt );
XVisualInfo temp;
temp.visualid = XVisualIDFromVisual(watt.visual);
int n;
XVisualInfo* visual = XGetVisualInfo( m_X11View->m_pDisplay, VisualIDMask, &temp, &n );
int n_elems = 0;

if (glXQueryExtension(m_X11View->m_pDisplay,NULL,NULL))
{
    // create dummy base context to init glew, create proper 4.x context
    m_X11View->m_GLXContext = glXCreateContext( m_X11View->m_pDisplay, visual, 0, true );
    glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

    // some debug output stuff
    std::cerr << "GL vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cerr << "GL renderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cerr << "GL Version: " << glGetString(GL_VERSION) << std::endl;
    std::cerr << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;

    int glx_version_major;
    int glx_version_minor;
    if (glXQueryVersion(m_X11View->m_pDisplay,&glx_version_major,&glx_version_minor))
    {
        if (ExtensionSupported(m_X11View->m_pDisplay,screen,"GLX_SGIX_fbconfig"))
        {
            int result = glewInit();
            if (GLEW_OK==result)
            {
                std::cerr << "GLEW init successful" << std::endl;

                PFNGLXGETFBCONFIGATTRIBSGIXPROC GetFBConfigAttribSGIX = (PFNGLXGETFBCONFIGATTRIBSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetFBConfigAttribSGIX");
                PFNGLXCHOOSEFBCONFIGSGIXPROC ChooseFBConfigSGIX = (PFNGLXCHOOSEFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXChooseFBConfigSGIX");
                PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC CreateContextWithConfigSGIX = (PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXCreateContextWithConfigSGIX");
                PFNGLXGETVISUALFROMFBCONFIGSGIXPROC GetVisualFromFBConfigSGIX = (PFNGLXGETVISUALFROMFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetVisualFromFBConfigSGIX");

                int gl_attribs[] = {
                    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
                    GLX_CONTEXT_MINOR_VERSION_ARB, 4,
                    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
                    //GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
                    0
                };

                GLXFBConfigSGIX* configs = ChooseFBConfigSGIX( m_X11View->m_pDisplay, screen, NULL, &n_elems );
                glXDestroyContext( m_X11View->m_pDisplay, m_X11View->m_GLXContext );
                m_X11View->m_GLXContext = glXCreateContextAttribsARB( m_X11View->m_pDisplay, configs[0], 0, true, gl_attribs );
                glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

                /*
                glDebugMessageCallback( GLDebugLog, NULL );
                // setup message control
                // disable everything
                // enable errors only
                glDebugMessageControl( GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, 0, GL_FALSE );
                glDebugMessageControl( GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE,
                                       0, 0, GL_TRUE );
                */
            }
            else
            {
                std::cerr << "GLEW init failed: " << glewGetErrorString(result) << std::endl;
            }
        }
    }
}

How do I perform a 'simplified experiment' with Nvidia's Performance Toolkit?

I am trying to use Nvidia's performance toolkit to identify the performance bottleneck in an OpenGL application. Based on the user guide and the samples provided, I have arrived at this code:
// ********************************************************
// Set up NVPMAPI
#define NVPM_INITGUID
#include "NvPmApi.Manager.h"

// Simple singleton implementation for grabbing the NvPmApi
static NvPmApiManager S_NVPMManager;
NvPmApiManager *GetNvPmApiManager() { return &S_NVPMManager; }
const NvPmApi* getNvPmApi() { return S_NVPMManager.Api(); }

void MyApp::profiledRender()
{
    NVPMRESULT nvResult;
    nvResult = GetNvPmApiManager()->Construct(L"C:\\Program Files\\PerfKit_4.1.0.14260\\bin\\win7_x64\\NvPmApi.Core.dll");
    if (nvResult != S_OK)
    {
        return; // This is an error condition
    }

    auto api = getNvPmApi();
    nvResult = api->Init();
    if ((nvResult) != NVPM_OK)
    {
        return; // This is an error condition
    }

    NVPMContext context;
    nvResult = api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context);
    if (nvResult != NVPM_OK)
    {
        return; // This is an error condition
    }

    api->AddCounterByName(context, "GPU Bottleneck");

    NVPMUINT nCount(1);
    api->BeginExperiment(context, &nCount);
    for (NVPMUINT i = 0; i < nCount; i++) {
        api->BeginPass(context, i);
        render();
        glFinish();
        api->EndPass(context, i);
    }
    api->EndExperiment(context);

    NVPMUINT64 bottleneckUnitId(42424242);
    NVPMUINT64 bottleneckCycles(42424242);
    api->GetCounterValueByName(context, "GPU Bottleneck", 0, &bottleneckUnitId, &bottleneckCycles);

    char name[256] = { 0 };
    NVPMUINT length = 0;
    api->GetCounterName(bottleneckUnitId, name, &length);

    NVPMUINT64 counterValue(42424242), counterCycles(42424242);
    api->GetCounterValue(context, bottleneckUnitId, 0, &counterValue, &counterCycles);

    std::cout << "--- NVIDIA Performance Kit GPU profile ---\n"
        "bottleneckUnitId: " << bottleneckUnitId
        << ", bottleneckCycles: " << bottleneckCycles
        << ", unit name: " << name
        << ", unit value: " << counterValue
        << ", unit cycles: " << counterCycles
        << std::endl;
}
However, the printed output shows that all of my integer values have been left unmodified:
--- NVIDIA Performance Kit GPU profile ---
bottleneckUnitId: 42424242, bottleneckCycles: 42424242, unit name: , unit value:
42424242, unit cycles: 42424242
I am in a valid GL context when calling profiledRender, and while the cast in api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context); looks a tiny bit dodgy, it does return an OK result (whereas passing 0 for the context returns a not-OK result and passing in a random number causes an access violation).
This is built against Cinder 0.8.6 running in x64 on Windows 8.1. OpenGL 4.4, GeForce GT 750M.
OK, some more persistent analysis of the API return codes and further examination of the manual revealed the problems.
The render call needs to be wrapped in api->BeginObject(context, 0); and api->EndObject(context, 0);. That gives us a bottleneckUnitId.
It appears that the length pointer passed to GetCounterName both indicates the char array size as an input and is written to with the string length as output. This is kind of obvious on reflection but is a mistake copied from the user guide example. This gives us the name of the bottleneck.
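Putting those two fixes together, the relevant part of the experiment would look roughly like this (a sketch based only on the description above, reusing the same NvPmApi calls as the question):

for (NVPMUINT i = 0; i < nCount; i++) {
    api->BeginPass(context, i);
    api->BeginObject(context, 0);   // wrap the draw calls in an object
    render();
    glFinish();
    api->EndObject(context, 0);
    api->EndPass(context, i);
}
api->EndExperiment(context);

char name[256] = { 0 };
NVPMUINT length = sizeof(name);     // in: buffer size, out: string length written
api->GetCounterName(bottleneckUnitId, name, &length);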

glGenBuffers - GL_INVALID_OPERATION

I'm having an issue whenever I call glGenBuffers on Windows. Whenever I call glGenBuffers, or any 3.2 or above function, OpenGL returns an INVALID_OPERATION error. After reading around the web, this is probably caused by not having the updated function pointers for 3.2 on Windows. From everything I have read, you must acquire the function pointers at runtime by asking the Windows API wglGetProcAddress to return the function pointer for use with your program and your current driver. This in itself isn't hard, but why reinvent the wheel? Instead I chose to include GLEW to handle the function pointers for me.
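For reference, the manual approach described above looks roughly like this (a sketch only; it assumes <GL/glext.h> from the Khronos registry is on the include path for the PFNGLGENBUFFERSPROC typedef, and a GL context must already be current on the calling thread):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

// Windows-only: ask the driver for the 3.x entry point at runtime.
PFNGLGENBUFFERSPROC pglGenBuffers =
    (PFNGLGENBUFFERSPROC) wglGetProcAddress("glGenBuffers");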
Here is a small program to demonstrate my issue.
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>

void PrintError(void)
{
    // Print glGetError()
}

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    PrintError();

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    GLenum error = glewInit();
    if (error != GLEW_OK)
        std::cerr << glewGetErrorString(error) << std::endl;

    // GLEW information
    if (GLEW_VERSION_3_2)
        std::cout << "Supports 3.2.." << std::endl;

    // Try buffer
    GLuint buffer;
    glGenBuffers(1, &buffer); // INVALID_OPERATION
    PrintError();             // Error shows up

    // This is our main loop
    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
When I run the above code I get an OpenGL window. However, in my console I see that the first call to PrintError returned NO_ERROR while the second call returned INVALID_OPERATION. I have a feeling I'm overlooking some small fact, but I just can't seem to locate it on the GLFW or GLEW webpages.
I'm currently running:
glfw-3.0.4 (32bit)
glew-1.10.0 (32bit)
**** Update ****
In response to glampert's post, I added the following code after the glewInit() call.
GLenum loop_error = glGetError();
while (loop_error != GL_NO_ERROR)
{
    switch (loop_error)
    {
        case GL_NO_ERROR: std::cout << "GL_NO_ERROR" << std::endl; break;
        case GL_INVALID_ENUM: std::cout << "GL_INVALID_ENUM" << std::endl; break;
        case GL_INVALID_VALUE: std::cout << "GL_INVALID_VALUE" << std::endl; break;
        case GL_INVALID_OPERATION: std::cout << "GL_INVALID_OPERATION" << std::endl; break;
        case GL_INVALID_FRAMEBUFFER_OPERATION: std::cout << "GL_INVALID_FRAMEBUFFER_OPERATION" << std::endl; break;
        case GL_OUT_OF_MEMORY: std::cout << "GL_OUT_OF_MEMORY" << std::endl; break;
    }
    loop_error = glGetError();
}
This confirms his assumption that the invalid operation is caused by the glewInit() call.
**** Update ****
It looks like this is a known issue with GLEW & 3.2 context.
http://www.opengl.org/wiki/OpenGL_Loading_Library
After identifying GLEW as the troublemaker, I found the following two posts.
OpenGL: glGetError() returns invalid enum after call to glewInit()
Seems like the suggested solution is to set the experimental flag, which I'm already doing. However, the website mentions that even after doing so, there is still the possibility that it will cause an invalid operation and fail to grab the function pointers.
I think at this point my best solution is to just grab my own function pointers.
It could very well be that the error you are getting is a residual error from GLFW or GLEW.
Try adding this after the library init code:
GLenum error;
while ((error = glGetError()) != GL_NO_ERROR)
{
    // print(error), etc...
}
and before you attempt to call any OpenGL function, to clear the error cache.
If the error is indeed caused by GLEW, it may not necessarily be a dangerous one that stops the library from working. So I wouldn't drop the library just because of this issue, which will eventually be fixed. However, if you do decide to fetch the function pointers yourself, GLFW provides the glfwGetProcAddress function, which will allow you to do so in a portable manner.
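A sketch of what that manual fetching looks like with glfwGetProcAddress (assuming the PFNGLGENBUFFERSPROC typedef from <GL/glext.h> or GLEW's headers, and a context already made current):

// Portable alternative to GLEW for a single entry point.
PFNGLGENBUFFERSPROC myGenBuffers =
    (PFNGLGENBUFFERSPROC) glfwGetProcAddress("glGenBuffers");
if (myGenBuffers)
{
    GLuint buffer;
    myGenBuffers(1, &buffer);
}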