Why is OpenGL version 0.0? - c++

I was troubleshooting an OpenGL application on a new computer when I discovered that GLFW could not create a window with the specified version of OpenGL. I created a minimal version of the application to test the version of OpenGL created, and no matter what version I hint, the version I get is 0.0. Do I simply not have OpenGL? This seems impossible, since glxgears runs and glxinfo suggests that I have version 2.1.
#include <iostream>
#include <GLFW/glfw3.h>
int main(int argc, const char *argv[]) {
    if(!glfwInit()) {
        return 1;
    }
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    auto win = glfwCreateWindow(640, 480, "", NULL, NULL);
    if(!win) {
        return 1;
    }
    int major = 0, minor = 0;
    glfwMakeContextCurrent(win);
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::cout << "Initialized with OpenGL "
              << major << "." << minor << std::endl;
    glfwDestroyWindow(win);
    glfwTerminate();
}
The output of the application is "Initialized with OpenGL 0.0". A window briefly opens and closes and the application terminates without errors.

The GL_MAJOR_VERSION and GL_MINOR_VERSION queries were introduced in GL 3.0. Prior to that, they will just generate a GL_INVALID_ENUM error during the glGetIntegerv call and leave your variables untouched.
You have to use glGetString(GL_VERSION) to reliably get the version number if you can't be sure that you are on a >= 3.0 context. If you need the numbers, you'll have to parse the string manually.
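For example, a minimal sketch of that fallback (assuming only the context created above; std::sscanf needs an extra #include <cstdio>) could look like this:
    // Works on any context version: on desktop GL, GL_VERSION starts with "major.minor".
    const char* versionStr = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if(versionStr && std::sscanf(versionStr, "%d.%d", &major, &minor) == 2) {
        std::cout << "Initialized with OpenGL " << major << "." << minor << std::endl;
    } else {
        std::cout << "Could not parse GL_VERSION" << std::endl;
    }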

Related

GL_INVALID_OPERATION in glGetIntegerv() with GLAD

I use GLAD (config) to load OpenGL functions and GLFW 3.3.8 to create the context. Each time I start my program it pops an ERROR 1282 in glGetIntegerv from the GLAD debug post-callback function (as far as I know, it is invoked after each gl* function and prints an error if one occurred). I figured that this happens after returning from main().
Here's the code (it loads OpenGL 3.3 and shows red window until it is closed, pretty simple I think):
#include <iostream>
#include <glad/glad.h>
#include <GLFW/glfw3.h>
int main()
{
    if(glfwInit() != GLFW_TRUE)
        throw std::runtime_error{"Unable to initialize GLFW."};

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow * w{glfwCreateWindow(100, 100, "title", nullptr, nullptr)};
    if(w == nullptr)
        throw std::runtime_error{"Unable to create window."};

    glfwMakeContextCurrent(w);
    if(not gladLoadGLLoader(GLADloadproc(glfwGetProcAddress)))
        throw std::runtime_error{"Unable to load OpenGL functions."};

    glViewport(0, 0, 100, 100);
    while(not glfwWindowShouldClose(w))
    {
        glfwPollEvents();
        glClearColor(1.f, 0.f, 0.f, 1.f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(w);
    }
    glfwMakeContextCurrent(nullptr);
    glfwDestroyWindow(w);
    glfwTerminate();
    std::cout << "Hey!" << std::endl;
    return 0;
}
The output is:
Hey!
ERROR 1282 in glGetIntegerv
From this callstack:
#0 0x00416f91 in _post_call_callback_default_gl (name=0x446d40 <_glfwDataFormat+10036> "glGetIntegerv", funcptr=0x41c1ec <glad_debug_impl_glGetIntegerv#8>, len_args=2) at <glad.c>:45
#1 0x0041c265 in glad_debug_impl_glGetIntegerv#8 (arg0=33309, arg1=0x4526cc <num_exts_i>) at <glad.c>:1385
#2 0x00417168 in get_exts () at <glad.c>:220
#3 0x0042691f in find_extensionsGL () at <glad.c>:3742
#4 0x00426d12 in gladLoadGLLoader (load=0x402a2e <glfwGetProcAddress>) at <glad.c>:3821
#5 0x004016f8 in main () at <main.cpp>:33
Error 1282 is GL_INVALID_OPERATION, but it pops up after the program has ended (or at least after main() has returned). Even if I move the whole code into another function (i.e. create and destroy everything in a separate function) and then invoke it from main(), the error still appears after the return 0; from main().
This did not happen when I used GLEW to load OpenGL functions, but maybe it was silenced. I didn't find anything similar to my problem on the internet. What am I doing wrong? Do I have to unload OpenGL or something like that?
UPD: The error message actually pops up in gladLoadGLLoader(), not after the end of main().
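One way to pinpoint the stray error (a sketch, assuming glad was generated with the debug option, which is what provides glad_set_post_callback and the default post-call callback visible in the callstack above) is to install your own post-callback before loading:
#include <glad/glad.h>
#include <cstdio>

// Report the failing call by name the moment the error is raised.
void post_gl_call(const char *name, void *funcptr, int len_args, ...)
{
    (void)funcptr; (void)len_args;
    GLenum err = glad_glGetError(); // raw pointer, avoids re-entering the debug wrapper
    if(err != GL_NO_ERROR)
        std::fprintf(stderr, "GL error 0x%04X raised by %s\n", err, name);
}

// ... then in main(), after glfwMakeContextCurrent(w) and before gladLoadGLLoader(...):
// glad_set_post_callback(post_gl_call);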

New Vulkan project in CLion on Mac OS will not create VkInstance

After my first successful attempt at a 3D engine using Java and OpenGL (LWJGL3), I have decided to try my hand at Vulkan, using C++.
I have barely any experience with C/C++ and I am aware of the steep learning curve of Vulkan. This is however not a problem.
I decided to follow this tutorial: https://vulkan-tutorial.com/Introduction
It showed me how to create a new project with Vulkan using Xcode (as I am on Mac OS Mojave). I would, however, like to continue the rest of the tutorial using CLion, as I will be switching between multiple operating systems.
I tried my hand at creating a CLion project and succeeded in making my first CMakeLists file; however, something seems to be wrong. The file currently consists of the following:
cmake_minimum_required(VERSION 3.12)
project(VulkanTesting)
set(CMAKE_CXX_STANDARD 14)
add_executable(VulkanTesting main.cpp)
include_directories(/usr/local/include)
include_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/include)
target_link_libraries(VulkanTesting /usr/local/lib/libglfw.3.3.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.1.92.dylib)
# Don't know if I need the next two lines
link_directories(/usr/local/lib)
link_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib)
The reason I showed the above file will become apparent in the question.
The 'Program' so far is the following:
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#include <iostream>
#include <stdexcept>
#include <functional>
#include <cstdlib>
#include <vector>
const int WIDTH = 800;
const int HEIGHT = 600;
class HelloTriangleApplication {
public:
    void run() {
        initWindow();
        initVulkan();
        mainLoop();
        cleanup();
    }
private:
    GLFWwindow* window;
    VkInstance instance;
    void initWindow(){
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
        glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
        window = glfwCreateWindow(WIDTH, HEIGHT, "My first Vulkan window", nullptr, nullptr);
    }
    void initVulkan() {
        createInstance();
    }
    void createInstance(){
        // Instantiate Application Info
        VkApplicationInfo applicationInfo = {};
        applicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        applicationInfo.pApplicationName = "Hello Triangle";
        applicationInfo.applicationVersion = VK_MAKE_VERSION(1,0,0);
        applicationInfo.pEngineName = "No Engine";
        applicationInfo.engineVersion = VK_MAKE_VERSION(1,0,0);
        applicationInfo.apiVersion = VK_API_VERSION_1_0;
        // Instantiate Instance Creation Info
        VkInstanceCreateInfo createInfo = {};
        createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        createInfo.pApplicationInfo = &applicationInfo;
        // Get GLFW platform specific extensions
        uint32_t glfwExtensionCount = 0;
        const char** glfwExtensions;
        glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionCount);
        // Fill in required extensions in Instance Creation Info
        createInfo.enabledExtensionCount = glfwExtensionCount;
        createInfo.ppEnabledExtensionNames = glfwExtensions;
        // For validation layers, this is a later step in the tutorial.
        createInfo.enabledLayerCount = 0;
        // Create the Vulkan instance, and check if it was successful.
        VkResult result = vkCreateInstance(&createInfo, nullptr, &instance);
        if(result != VK_SUCCESS){
            std::cout << "glfwExtensionCount: " << glfwExtensionCount << "\n";
            std::cout << "glfwExtensionNames: " << &glfwExtensions << "\n";
            std::cout << "result: " << result << "\n";
            throw std::runtime_error("Failed to create Vulkan Instance");
        }
    }
    void mainLoop() {
        while(!glfwWindowShouldClose(window)){
            glfwPollEvents();
        }
    }
    void cleanup() {
        glfwDestroyWindow(window);
        glfwTerminate();
    }
};
int main() {
    HelloTriangleApplication app;
    try {
        app.run();
    } catch (const std::exception& e) {
        std::cerr << e.what() << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
The problem I am having is that when I try to run the program, it will not create a VkInstance. The function returns VK_ERROR_INCOMPATIBLE_DRIVER. Now, I doubt that the driver is in fact incompatible: for one, I have run the demo applications that came with the Vulkan SDK, and for another, I have been able to run the exact same 'program' in Xcode. When I investigated the problem a bit further, I noticed that the glfwGetRequiredInstanceExtensions function returns no extensions when the program is run from CLion like this, but does return one in the Xcode equivalent.
This all leads me to believe that there is something I have done wrong in linking the libraries/frameworks in the CMake file, because I am aware that Vulkan is not directly supported on Mac OS, but instead (somehow?) passes through a layer to communicate with Metal.
Do I need to specify a way for the program to pass its Vulkan functionality through a Metal layer, and is this done automagically in XCode, or is there another problem with my approach?
Any help would be greatly appreciated!
You might want to look at the MacOS Getting Started Guide on the LunarXchange website and in your SDK. There is a section at the end that shows how to use CMake to build a Vulkan app and run it on MacOS. You also may want to use the FindVulkan CMake module instead of manually setting the include directories and the target link libraries.
But my first guess about your specific problem is that you may not be setting the VK_ICD_FILENAMES environment variable. You are correct in your observation that there is no direct support for Vulkan. Instead, the support is provided by the MoltenVK library which is treated as a Vulkan driver. But this "driver" is not installed in any system directory by the SDK. The SDK is just unzipped in your home directory structure, so you must tell the Vulkan loader where to find it via this environment variable.
Again, the CMake section at the end of the Getting Started Guide demonstrates the use of this environment variable. And the entire guide goes into additional detail about how the various Vulkan and MoltenVK components work.
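As a quick sanity check from inside the CLion-built binary, a hedged sketch like the following (the helper name dumpInstanceExtensions is made up here; the calls themselves are standard loader entry points) can show whether the loader sees the MoltenVK ICD at all, e.g. when called at the top of initVulkan():
#include <vulkan/vulkan.h>
#include <cstdlib>
#include <iostream>
#include <vector>

void dumpInstanceExtensions() {
    // If this is unset or points at the wrong MoltenVK ICD JSON, the loader finds no
    // driver and vkCreateInstance returns VK_ERROR_INCOMPATIBLE_DRIVER.
    const char* icd = std::getenv("VK_ICD_FILENAMES");
    std::cout << "VK_ICD_FILENAMES = " << (icd ? icd : "(not set)") << "\n";

    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> extensions(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, extensions.data());

    // GLFW needs VK_KHR_surface plus a platform surface extension; if this list is
    // empty, glfwGetRequiredInstanceExtensions will also come back empty.
    std::cout << count << " instance extensions available:\n";
    for(const auto& e : extensions)
        std::cout << "  " << e.extensionName << "\n";
}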

xCode 8.1 GLFWWindow "first responder" Issue

I have recently been working with OpenGL and have decided to use C++ for my latest OpenGL project. I am using Xcode 8.1 with my library paths and header paths configured correctly. Everything compiles fine, but I get this error at runtime:
2016-11-03 15:17:24.649264 Modulo[25303:14858638] [General] ERROR: Setting <GLFWContentView: 0x100343da0> as the first responder for window <GLFWWindow: 0x100222540>, but it is in a different window ((null))! This would eventually crash when the view is freed. The first responder will be set to nil.(
0 AppKit 0x00007fff85c069a4 -[NSWindow _validateFirstResponder:] + 566
1 AppKit 0x00007fff853f79eb -[NSWindow _setFirstResponder:] + 31
2 AppKit 0x00007fff8549f66a -[NSWindow _realMakeFirstResponder:] + 406
3 AppKit 0x00007fff8549f480 -[NSWindow makeFirstResponder:] + 123
4 libglfw3.3.dylib 0x000000010011194a _glfwPlatformCreateWindow + 610
5 libglfw3.3.dylib 0x000000010010d533 glfwCreateWindow + 428
6 Modulo 0x00000001000010a8 main + 296
7 libdyld.dylib 0x00007fff9c828255 start + 1
8 ??? 0x0000000000000001 0x0 + 1)
The code I run to generate this error is as follows:
#include <iostream>
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
int main(int argc, const char * argv[]) {
    //Engine Startup.
    std::cout << "<----- Engine Start-Up ----->" << std::endl;
    //Initialize GLFW.
    if(!glfwInit()) {
        std::cout << "- GLFW Failed to Initialize!" << std::endl;
        return -1;
    }
    std::cout << "+ GLFW Initialized!" << std::endl;
    //Create GLFWWindow
    GLFWwindow* window = glfwCreateWindow(640, 480, "Engine", nullptr, nullptr);
    if(!window) {
        std::cout << "- GLFWWindow Failed to Create!" << std::endl;
        glfwTerminate();
        return -1;
    }
    std::cout << "+ GLFWWindow Created!" << std::endl;
    return 0;
}
The program performs as it should, but this error could become an issue later and also makes the console output hard to read, so I would like to sort it out early!
Thank you in advance and if any more information is needed please let me know! :)
I'm a beginner and I also faced this issue.
I got the error but still succeeded in creating the window. How about adding:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
before glfwCreateWindow?
Note the discussion here: GLFW first responder error, which indicates it is a known bug on macOS Sierra that has been addressed in the GLFW git repo but not yet released.

libGL errors when executing OpenGL program

I get this error when I try to execute my program:
libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
My code (I took it from the "OpenGL Development Cookbook" book):
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>
const int WIDTH = 640;
const int HEIGHT = 480;
void OnInit()
{
    glClearColor(1, 0, 0, 0);
    std::cout << "Initialization successfull" << std::endl;
}
void OnShutdown()
{
    std::cout << "Shutdown successfull" << std::endl;
}
void OnResize(int nw, int nh)
{
}
void OnRender()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 3);
    glutInitContextFlags(GLUT_CORE_PROFILE | GLUT_DEBUG);
    glutInitContextProfile(GLUT_FORWARD_COMPATIBLE);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("OpenGL");
    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if(GLEW_OK != err) { std::cerr << "Error: " << glewGetErrorString(err) << std::endl; }
    else { if(GLEW_VERSION_3_3) { std::cout << "Driver supports OpenGL 3.3\n Details: " << std::endl; } }
    std::cout << "\tUsing glew: " << glewGetString(GLEW_VERSION) << std::endl;
    std::cout << "\tVendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cout << "\tRenderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cout << "\tGLSL: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
    OnInit();
    glutCloseFunc(OnShutdown);
    glutDisplayFunc(OnRender);
    glutReshapeFunc(OnResize);
    glutMainLoop();
    return 0;
}
I verified that my driver supports the OpenGL version I am using with the glxinfo | grep "OpenGL" command:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.5.9
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.5.9
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
I am using Ubuntu 14.04.3.
I'm not sure, but I think I get this error because I am using an Intel GPU and not an Nvidia one.
It's hard to tell from a distance, but the errors you have there look like a damaged OpenGL client library installation. glxinfo queries the GLX driver loaded into the Xorg server, which is somewhat independent from the installed libGL (as long as only indirect rendering calls are made). The errors indicate that the installed libGL either doesn't match the DRI drivers or the DRI libraries are damaged.
Either way, the best course of action is to do a clean reinstall of everything related to OpenGL on your system. I.e. do a forced reinstall of xorg-server, xf86-video-…, mesa, libdri… and so on.
I faced a very similar error:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
Removing the following line solved it:
glutInitContextVersion(3, 3);
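If the 3.3 request is actually needed, it may also be worth checking (an observation added here, not part of the answers above) that the profile and flag constants in the question's code are swapped: GLUT_CORE_PROFILE belongs in glutInitContextProfile, while GLUT_FORWARD_COMPATIBLE and GLUT_DEBUG belong in glutInitContextFlags. As written, the code effectively requests a 3.3 compatibility-style context, which this Mesa driver only offers up to 3.0, and that mismatch alone can produce GLXBadFBConfig. A corrected sketch:
    glutInitContextVersion(3, 3);
    glutInitContextProfile(GLUT_CORE_PROFILE);                  // profile selection goes here
    glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG); // context flags go here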

Changing the OpenGL Context Version for QGLWidgets in Qt 4.8.6 on OS X

I want to use Qt 4.8.6 to render OpenGL content with a QGLWidget. The machine I'm working on is a MacBook Pro with OS X 10.9.4.
The QGLWidget is created by passing a QGLFormat object with a requested format version of the 3.2 core profile. The problem I am encountering is that the OpenGL version reported by the QGLContext remains 1.0, no matter what QGLFormat I specify.
After researching the topic I found the Qt OpenGL Core Profile Tutorial. However, the example source code reports the same OpenGL version 1.0 as before. Curiously, the call
qDebug() << "Widget OpenGl: " << format().majorVersion() << "." << format().minorVersion();
qDebug() << "Context valid: " << context()->isValid();
qDebug() << "Really used OpenGl: " << context()->format().majorVersion() << "." << context()->format().minorVersion();
qDebug() << "OpenGl information: VENDOR: " << (const char*)glGetString(GL_VENDOR);
qDebug() << " RENDERDER: " << (const char*)glGetString(GL_RENDERER);
qDebug() << " VERSION: " << (const char*)glGetString(GL_VERSION);
qDebug() << " GLSL VERSION: " << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
reported a version string of 2.1
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 2.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 1.20
Using the Cocoa code suggested in this OS X OpenGL context discussion from 2011, the output of the version numbers changed to
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 4.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 4.10
While the driver is now reporting the expected OpenGL version number, I am still only able to get a 1.0 QGLWidget context. The QGLFormat object that is passed to the QGLWidget constructor is set up using
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);
I am somewhat at a loss as to why I am still only getting a version 1.0 context. Even without the Cocoa-generated OpenGL context it should be possible to increase the context version to 2.1, but it remains fixed at 1.0 regardless of the QGLFormat passed to the constructor.
Any pointers as to why the QGLWidget Context remains at version 1.0 are very much appreciated.
Update 1
Further experimentation showed that the code returns the requested OpenGL version on Ubuntu 13.04 Linux. The issue seems to be specific to OS X.
Update 2
I built a minimal (non-)working example:
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>
#include <QtCore/QDebug>
int main(int argc, char **argv) {
    QApplication app(argc, argv);
    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setVersion(3,2);
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setSampleBuffers(true);
    QGLWidget c(fmt);
    c.show();
    qDebug() << c.context()->requestedFormat();
    qDebug() << c.context()->format();
    return app.exec();
}
which can be built on Ubuntu using
g++ main.cpp -I/usr/include/qt4 -lQtGui -lQtCore -lQtOpenGL -lGL -o test
or under OS X
g++ main.cpp -framework OpenGL -framework QtGui -framework QtCore -framework QtOpenGL -o test
It prints two lines of QGLFormat debug output. The first is the requested format and the second line is the actual context format. Both are supposed to show a major.minor version number of 3.2. It seems to be working under Ubuntu Linux, but fails when using OS X.
Update 3
Fun times. It might be a bug in Qt 4.8.6, since the issue does not occur when compiling the example against Qt 5.3.1. A bug report has been filed.
Can someone else verify this behaviour?
Yes. That's platform-specific. Please find the solution here.
Override QGLContext::chooseMacVisual to specify platform-specific initialization.
CustomGLContext.hpp:
#ifdef Q_WS_MAC
void* select_3_2_mac_visual(GDHandle handle);
#endif // Q_WS_MAC

class CustomGLContext : public QGLContext {
    ...
#ifdef Q_WS_MAC
    void* chooseMacVisual(GDHandle handle) override {
        return select_3_2_mac_visual(handle); // call cocoa code
    }
#endif // Q_WS_MAC
};
gl_mac_specific.mm:
void* select_3_2_mac_visual(GDHandle handle)
{
    static const int Max = 40;
    NSOpenGLPixelFormatAttribute attribs[Max];
    int cnt = 0;

    attribs[cnt++] = NSOpenGLPFAOpenGLProfile;
    attribs[cnt++] = NSOpenGLProfileVersion3_2Core;
    attribs[cnt++] = NSOpenGLPFADoubleBuffer;
    attribs[cnt++] = NSOpenGLPFADepthSize;
    attribs[cnt++] = (NSOpenGLPixelFormatAttribute)16;
    attribs[cnt] = 0;

    Q_ASSERT(cnt < Max);
    return [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
}
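A hedged usage sketch: QGLWidget in Qt 4.8 has a constructor that accepts a QGLContext*, so (assuming CustomGLContext gains a constructor forwarding the QGLFormat to QGLContext, which the snippet above elides) the custom context can be plugged into the minimal example from Update 2 roughly like this:
    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setVersion(3, 2);
    fmt.setSampleBuffers(true);

    // The widget uses the context whose chooseMacVisual() runs the Cocoa code above.
    QGLWidget widget(new CustomGLContext(fmt));
    widget.show();
    qDebug() << widget.context()->format();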