libGL errors when executing OpenGL program - C++

I get this error when I try to execute my program:
libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
My code (I took it from the "OpenGL Development Cookbook" book):
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>
const int WIDTH = 640;
const int HEIGHT = 480;
void OnInit()
{
    glClearColor(1, 0, 0, 0);
    std::cout << "Initialization successful" << std::endl;
}

void OnShutdown()
{
    std::cout << "Shutdown successful" << std::endl;
}

void OnResize(int nw, int nh)
{
}

void OnRender()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 3);
    glutInitContextFlags(GLUT_CORE_PROFILE | GLUT_DEBUG);
    glutInitContextProfile(GLUT_FORWARD_COMPATIBLE);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("OpenGL");

    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        std::cerr << "Error: " << glewGetErrorString(err) << std::endl;
    }
    else if (GLEW_VERSION_3_3)
    {
        std::cout << "Driver supports OpenGL 3.3\n Details: " << std::endl;
    }

    std::cout << "\tUsing glew: " << glewGetString(GLEW_VERSION) << std::endl;
    std::cout << "\tVendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cout << "\tRenderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cout << "\tGLSL: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;

    OnInit();
    glutCloseFunc(OnShutdown);
    glutDisplayFunc(OnRender);
    glutReshapeFunc(OnResize);
    glutMainLoop();
    return 0;
}
I verified that my driver supports the OpenGL version I am using with the glxinfo | grep "OpenGL" command:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.5.9
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.5.9
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
I am using Ubuntu 14.04.3.
I'm not sure, but I think I get this error because I am using Intel graphics and not Nvidia.

It's hard to tell from a distance, but the errors you have there look like a damaged OpenGL client library installation. glxinfo queries the GLX driver loaded into the Xorg server, which is somewhat independent of the installed libGL (as long as only indirect rendering calls are made). The errors indicate that the installed libGL either doesn't match the DRI drivers or that the DRI libraries are damaged.
Either way, the best course of action is a clean reinstall of everything related to OpenGL on your system, i.e. a forced reinstall of xorg-server, xf86-video-…, mesa, libdri… and so on.

I faced a very similar error:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 42
Current serial number in output stream: 41
Removing the following line solved it:
glutInitContextVersion(3, 3);
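If you want to see which context you actually end up with once that line is gone, a minimal diagnostic sketch along these lines (assuming freeglut and GLEW, as in the question) just prints the version and renderer strings of whatever default context the driver hands you:

// Minimal freeglut/GLEW diagnostic: create a window without requesting a
// specific context version and print what the driver actually provides.
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>

void OnRender()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    // No glutInitContextVersion/glutInitContextProfile calls here, so
    // freeglut falls back to the legacy context the driver offers.
    glutInitWindowSize(640, 480);
    glutCreateWindow("Context check");

    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        std::cerr << "glewInit failed" << std::endl;
        return 1;
    }

    std::cout << "GL_VERSION:  " << glGetString(GL_VERSION) << std::endl;
    std::cout << "GL_RENDERER: " << glGetString(GL_RENDERER) << std::endl;

    glutDisplayFunc(OnRender);
    glutMainLoop();
    return 0;
}

On the Mesa/Intel stack from the glxinfo output above, this should report the 3.0 compatibility context; to actually get 3.3 you have to request a core profile context.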

Related

OpenCL could not find Intel HD 4000

I'll warn you in advance that my written English is not good, so please be patient with my mistakes.
I need to use the graphics card to run benchmarks of parallel algorithms for finite element analysis. I downloaded the Intel SDK from this link: https://software.intel.com/en-us/intel-opencl .
I am using Ubuntu 16.10, so I followed the instructions explained in this post: https://streamcomputing.eu/blog/2011-06-24/install-opencl-on-debianubuntu-orderly/ .
When I run a simple program that enumerates all the devices, it only recognizes the CPU and fails to find the graphics card. The same program works fine on a Mac (because OpenCL is part of the OS stack, of course).
// includes...
int main(int argc, const char * argv[])
{
    // See what standard OpenCL sees
    std::vector<cl::Platform> platforms;
    // Get platform
    cl::Platform::get(&platforms);
    // Temp
    std::string s;
    // Where the GPU lies
    cl::Device gpudevice;
    // Found a GPU
    bool gpufound = false;

    std::cout << "**** OPENCL ****" << std::endl;

    // See if we have a GPU
    for (auto p : platforms)
    {
        std::vector<cl::Device> devices;
        p.getDevices(CL_DEVICE_TYPE_ALL, &devices);
        for (auto d : devices)
        {
            std::size_t i = 4;
            d.getInfo(CL_DEVICE_TYPE, &i);
            std::cout << "> Device type " <<
                (i & CL_DEVICE_TYPE_CPU ? "CPU" : "") <<
                (i & CL_DEVICE_TYPE_GPU ? "GPU" : "") <<
                (i & CL_DEVICE_TYPE_ACCELERATOR ? "ACCELERATOR" : "");
            if (i & CL_DEVICE_TYPE_GPU)
            {
                gpudevice = d;
                gpufound = true;
            }
            std::cout << " Version " << s << std::endl;
        }
    }

    if (!gpufound)
    {
        std::cout << "NO GPU FOUND. ABORTING." << std::endl;
        return 1;
    }

    // Do other things...
The output is:
/home/andrea/Dropbox/fem/SiNDy/clfem/cmake-build-debug/vector_sycl
**** OPENCL ****
> Device type CPU Version
NO GPU FOUND. ABORTING.
Process finished with exit code 1
I tried adding the current user to the video group. I also tried to install Intel Media Server Studio following the instructions that come with the package, but I could not build the kernel because of some compile errors.
I also updated all the drivers with Ubuntu's automatic software updater, but the graphics card is still not found.
Maybe you want to try beignet, which is an open-source OpenCL implementation for Ivy Bridge and newer Intel iGPUs. There are beignet packages for Ubuntu 16.10; to be more precise, I think you are looking for the packages beignet-dev and beignet-opencl-icd. Test it yourself, since I have no Ubuntu installation currently available. (However, beignet itself works pretty well on my Intel HD Graphics 520 under Antergos/Arch Linux.)
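To check whether the beignet ICD is actually being picked up, a small sketch like this (assuming the same OpenCL C++ wrapper, CL/cl.hpp, that the question's code appears to use) asks each platform specifically for GPU devices:

// List only GPU devices per platform to verify that an ICD such as
// beignet is visible to the OpenCL runtime.
#include <CL/cl.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);

    for (auto& p : platforms)
    {
        std::cout << "Platform: " << p.getInfo<CL_PLATFORM_NAME>() << std::endl;

        std::vector<cl::Device> gpus;
        // A platform without GPU devices returns an error code; treat that
        // simply as "no GPUs on this platform".
        if (p.getDevices(CL_DEVICE_TYPE_GPU, &gpus) != CL_SUCCESS)
            continue;

        for (auto& d : gpus)
            std::cout << "  GPU: " << d.getInfo<CL_DEVICE_NAME>() << std::endl;
    }
    return 0;
}

If this still shows no GPU with beignet installed, checking that the ICD loader sees an entry under /etc/OpenCL/vendors is usually the next step.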

OpenCV gives Assertion failed error when running on GPU using OpenCL

I have an Nvidia GTX 970M GPU and I am trying to run a face detection algorithm in C++ that runs on the GPU using OpenCL.
The function where this error occurs is:
ocl::OclCascadeClassifier::detectMultiScale()
The error I get is:
OpenCV Error: Assertion failed (localThreads[0] * localThreads[1] * localThreads[2] <= kernelWorkGroupSize) in cv::ocl::openCLVerifyKernel
I know that this problem is related to the GPU of the machine, but I do not know how to fix it. I have tried OpenCV versions 2 and 3, but both give the same problem.
The problem was that it was trying to use the Intel HD Graphics GPU instead of the Nvidia GPU. I solved this by choosing the Nvidia GPU as the OpenCL Device.
The code I used was:
cv::ocl::DevicesInfo devInfo;
int res = cv::ocl::getOpenCLDevices(devInfo);
if (res == 0)
{
    std::cerr << "There is no OPENCL Here !" << std::endl;
}
else
{
    for (unsigned int i = 0; i < devInfo.size(); ++i)
    {
        std::cout << "Device : " << devInfo[i]->deviceName << " is present" << std::endl;
    }
}
cv::ocl::setDevice(devInfo[1]);
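Hard-coding devInfo[1] only works when the Nvidia card happens to be the second entry. A slightly more defensive sketch, using the same cv::ocl calls as above (the "NVIDIA"/"GeForce" name substrings are just an assumption about how the device reports itself), picks the device by name:

// Select the OpenCL device by name instead of a fixed index.
// Fragment; assumes the same OpenCV 2.4 ocl headers as the snippet above.
cv::ocl::DevicesInfo devInfo;
int res = cv::ocl::getOpenCLDevices(devInfo);
if (res == 0)
{
    std::cerr << "There is no OPENCL Here !" << std::endl;
}
else
{
    size_t chosen = 0;
    for (size_t i = 0; i < devInfo.size(); ++i)
    {
        std::string name = devInfo[i]->deviceName;
        if (name.find("NVIDIA") != std::string::npos ||
            name.find("GeForce") != std::string::npos)
        {
            chosen = i;
            break;
        }
    }
    cv::ocl::setDevice(devInfo[chosen]);
    std::cout << "Using device : " << devInfo[chosen]->deviceName << std::endl;
}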

Xcode 8.1 GLFWWindow "first responder" issue

I have recently been working with OpenGL and have decided to use C++ for my latest OpenGL project. I am using Xcode 8.1 with my library and header search paths set correctly. Everything compiles fine, but I get this error at runtime:
2016-11-03 15:17:24.649264 Modulo[25303:14858638] [General] ERROR: Setting <GLFWContentView: 0x100343da0> as the first responder for window <GLFWWindow: 0x100222540>, but it is in a different window ((null))! This would eventually crash when the view is freed. The first responder will be set to nil.(
0 AppKit 0x00007fff85c069a4 -[NSWindow _validateFirstResponder:] + 566
1 AppKit 0x00007fff853f79eb -[NSWindow _setFirstResponder:] + 31
2 AppKit 0x00007fff8549f66a -[NSWindow _realMakeFirstResponder:] + 406
3 AppKit 0x00007fff8549f480 -[NSWindow makeFirstResponder:] + 123
4 libglfw3.3.dylib 0x000000010011194a _glfwPlatformCreateWindow + 610
5 libglfw3.3.dylib 0x000000010010d533 glfwCreateWindow + 428
6 Modulo 0x00000001000010a8 main + 296
7 libdyld.dylib 0x00007fff9c828255 start + 1
8 ??? 0x0000000000000001 0x0 + 1)
The code I run to generate this error is as follows:
#include <iostream>

#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main(int argc, const char * argv[]) {
    //Engine Startup.
    std::cout << "<----- Engine Start-Up ----->" << std::endl;

    //Initialize GLFW.
    if(!glfwInit()) {
        std::cout << "- GLFW Failed to Initialize!" << std::endl;
        return -1;
    }
    std::cout << "+ GLFW Initialized!" << std::endl;

    //Create GLFWWindow.
    GLFWwindow* window = glfwCreateWindow(640, 480, "Engine", nullptr, nullptr);
    if(!window) {
        std::cout << "- GLFWWindow Failed to Create!" << std::endl;
        glfwTerminate();
        return -1;
    }
    std::cout << "+ GLFWWindow Created!" << std::endl;
    return 0;
}
The program performs as it should, but this error could become an issue later and also makes the console output hard to read while debugging, so I would like to sort it out early!
Thank you in advance, and if any more information is needed please let me know! :)
I'm a beginner and I also faced this issue.
I got the error too, but the window was still created successfully. How about adding:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
before glfwCreateWindow?
Note the discussion here: GLFW first responder error. It indicates this is a known bug in macOS Sierra which has been addressed in the GLFW git repository, but not yet released.
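For reference, a minimal sketch of where those hints would go in the code from the question (GLFW 3.x; the 3.2 core values mirror the hints suggested above):

#include <iostream>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    // Context hints must be set before glfwCreateWindow is called.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "Engine", nullptr, nullptr);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);
    std::cout << "GL_VERSION: " << glGetString(GL_VERSION) << std::endl;

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}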

Why is OpenGL version 0.0?

I was troubleshooting an OpenGL application on a new computer when I discovered that GLFW could not create a window with the specified version of OpenGL. I created a minimal version of the application to test the version of OpenGL created, and no matter what version I hint, the version I get is 0.0. Do I simply not have OpenGL? This seems impossible, since glxgears runs and glxinfo suggests that I have version 2.1.
#include <iostream>
#include <GLFW/glfw3.h>

int main(int argc, const char *argv[]) {
    if(!glfwInit()) {
        return 1;
    }
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    auto win = glfwCreateWindow(640, 480, "", NULL, NULL);
    if(!win) {
        return 1;
    }
    int major = 0, minor = 0;
    glfwMakeContextCurrent(win);
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::cout << "Initialized with OpenGL "
              << major << "." << minor << std::endl;
    glfwDestroyWindow(win);
    glfwTerminate();
}
The output of the application is "Initialized with OpenGL 0.0". A window briefly opens and closes and the application terminates without errors.
The GL_MAJOR_VERSION and GL_MINOR_VERSION queries were introduced in GL 3.0. Prior to that, they will just generate a GL_INVALID_ENUM error during the glGetIntegerv call and leave your variables untouched.
You have to use glGetString(GL_VERSION) to reliably get the version number if you can't be sure that you are on a >= 3.0 context. If you need the version as numbers, you'll have to parse the string manually.
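A version-agnostic variant of the test program could therefore look roughly like this (minimal sketch; the sscanf call assumes the usual "major.minor" prefix of the version string):

#include <cstdio>
#include <iostream>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    GLFWwindow* win = glfwCreateWindow(640, 480, "", NULL, NULL);
    if (!win)
    {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    // glGetString(GL_VERSION) works on every context version.
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if (version)
        std::sscanf(version, "%d.%d", &major, &minor);

    std::cout << "GL_VERSION: " << (version ? version : "(null)") << "\n"
              << "Parsed: " << major << "." << minor << std::endl;

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}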

Changing the OpenGL Context Version for QGLWidgets in Qt 4.8.6 on OS X

I want to use Qt 4.8.6 to render OpenGL content with a QGLWidget. The machine I'm working on is a MacBook Pro running OS X 10.9.4.
The QGLWidget is created by passing a QGLFormat object requesting a version 3.2 core profile. The problem I am encountering is that the OpenGL version reported by the QGLContext remains 1.0, no matter what QGLFormat I specify.
After researching the topic, I found the Qt OpenGL Core Profile Tutorial. However, the example source code reports the same OpenGL version 1.0 as before. Curiously, the call
qDebug() << "Widget OpenGl: " << format().majorVersion() << "." << format().minorVersion();
qDebug() << "Context valid: " << context()->isValid();
qDebug() << "Really used OpenGl: " << context()->format().majorVersion() << "." << context()->format().minorVersion();
qDebug() << "OpenGl information: VENDOR: " << (const char*)glGetString(GL_VENDOR);
qDebug() << " RENDERDER: " << (const char*)glGetString(GL_RENDERER);
qDebug() << " VERSION: " << (const char*)glGetString(GL_VERSION);
qDebug() << " GLSL VERSION: " << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
reported a version string of 2.1
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 2.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 1.20
Using the Cocoa code suggested in this OS X OpenGL context discussion from 2011, the version output changed to
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 4.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 4.10
While the driver is now reporting the expected OpenGL version number, I am still only able to get a 1.0 QGLWidget context. The QGLFormat object that is passed to the QGLWidget constructor is set up using
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);
I am somewhat at a loss as to why I am still only getting a version 1.0 context. Even without the Cocoa-generated OpenGL context, it should be possible to raise the context version to 2.1, but it remains fixed at 1.0 regardless of the QGLFormat passed to the constructor.
Any pointers as to why the QGLWidget context remains at version 1.0 are very much appreciated.
Update 1
Further experimentation showed that the code returns the requested OpenGL version on Ubuntu 13.04 Linux. The issue seems to be specific to OS X.
Update 2
I built a minimal (non-)working example:
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>
#include <QtCore/QDebug>

int main(int argc, char **argv) {
    QApplication app(argc, argv);

    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setVersion(3, 2);
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setSampleBuffers(true);

    QGLWidget c(fmt);
    c.show();
    qDebug() << c.context()->requestedFormat();
    qDebug() << c.context()->format();

    return app.exec();
}
which can be built on Ubuntu using
g++ main.cpp -I/usr/include/qt4 -lQtGui -lQtCore -lQtOpenGL -lGL -o test
or under OS X
g++ main.cpp -framework OpenGL -framework QtGui -framework QtCore -framework QtOpenGL -o test
It prints two lines of QGLFormat debug output. The first line is the requested format and the second line is the actual context format. Both are supposed to show a major.minor version of 3.2. It works under Ubuntu Linux, but fails on OS X.
Update 3
Fun times. It might be a bug in Qt 4.8.6, since the issue does not occur when compiling the example against Qt 5.3.1. A bug report has been filed.
Can someone else verify this behaviour?
Yes, that's platform-specific. Please find the solution here.
Override QGLContext::chooseMacVisual to specify platform-specific initialization.
CustomGLContext.hpp:
#ifdef Q_WS_MAC
void* select_3_2_mac_visual(GDHandle handle);
#endif // Q_WS_MAC

class CustomGLContext : public QGLContext {
    ...
#ifdef Q_WS_MAC
    void* chooseMacVisual(GDHandle handle) override {
        return select_3_2_mac_visual(handle); // call Cocoa code
    }
#endif // Q_WS_MAC
};
gl_mac_specific.mm:
void* select_3_2_mac_visual(GDHandle handle)
{
    static const int Max = 40;
    NSOpenGLPixelFormatAttribute attribs[Max];
    int cnt = 0;

    attribs[cnt++] = NSOpenGLPFAOpenGLProfile;
    attribs[cnt++] = NSOpenGLProfileVersion3_2Core;
    attribs[cnt++] = NSOpenGLPFADoubleBuffer;
    attribs[cnt++] = NSOpenGLPFADepthSize;
    attribs[cnt++] = (NSOpenGLPixelFormatAttribute)16;
    attribs[cnt] = 0;

    Q_ASSERT(cnt < Max);
    return [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
}
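For completeness, a hypothetical usage sketch (the forwarding constructor is an assumption about the part elided by "..." above; QGLWidget accepts a QGLContext* in its constructor):

// Hypothetical wiring of the custom context into the widget. Assumes
// CustomGLContext has a constructor along the lines of:
//     CustomGLContext(const QGLFormat& fmt) : QGLContext(fmt) {}
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);

QGLWidget widget(new CustomGLContext(fmt)); // widget uses the custom context
widget.show();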