I'm using Qt Creator on Linux Mint and I'm trying to run an OpenGL program.
The build doesn't give any errors, yet when I try to run the program inside Qt Creator, a terminal window appears and nothing else happens. When I run the program directly in a terminal I get the following output:
OpenGL version supported by this platform (3.3.0 NVIDIA 295.40):
OpenGL 3.3.0 NVIDIA 295.40, GLSL 3.30 NVIDIA via Cg compiler
Ready for OpenGL 2.0
Segmentation fault (core dumped)
My code:
#include <stdio.h>
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <GL/glext.h>
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitWindowSize(600, 600);
    glutInitWindowPosition(100, 100);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("Aquarium");

    glutDisplayFunc(onDisplay);
    glutMouseFunc(onMouse);
    glutIdleFunc(onIdle);
    glutKeyboardFunc(onKeyboard);
    glutReshapeFunc(onReshape);

    printf("OpenGL version supported by this platform (%s): \n", glGetString(GL_VERSION));
    printf("OpenGL %s, GLSL %s\n", glGetString(GL_VERSION), glGetString(GL_SHADING_LANGUAGE_VERSION));

    glewInit();
    if (glewIsSupported("GL_VERSION_2_0"))
        printf("Ready for OpenGL 2.0\n");
    else {
        printf("OpenGL 2.0 not supported\n");
        exit(1);
    }

    onInitialization();
    glutMainLoop();
    return 0;
}
I have defined the event handlers, and I have the onInitialization() method too.
If I try to printf something at the beginning of the onInitialization() method, the program doesn't print anything beyond the lines shown above, so I think it never steps inside
onInitialization(). I can't even debug this program in Qt Creator. What could cause this? And why can't I start the program from inside Qt Creator? Here is my .pro file:
QT -= gui core
TARGET = main
CONFIG += console
TEMPLATE = app
LIBS += /usr/lib/libglut.so /usr/lib/compiz/libopengl.so /usr/lib/i386-linux-gnu/libGLU.so /usr/lib/i386-linux-gnu/libGLEW.so
QMAKE_CXXFLAGS += -W -Wall -Wextra -pedantic
SOURCES += main.cpp
With the same settings I've been able to run the program under Windows (of course the LIBS were different there).
The onInitialization():
void onInitialization( ) {
    printf("onInitialization");

    soft.controlpoints.push_back(Point(5., 5., 5.));
    soft.speeds.push_back(Point(0., 0., 0.));
    soft.controlpoints.push_back(Point(5, -5., 5.));
    soft.speeds.push_back(Point(0., 0., 0.));
    soft.controlpoints.push_back(Point(5., -5., -5.));
    soft.speeds.push_back(Point(0., 0., 0.));

    soft2.controlpoints.push_back(Point(5., 5., 5.));
    soft2.speeds.push_back(Point(0., 0., 0.));
    soft2.controlpoints.push_back(Point(5, -5., 5.));
    soft2.speeds.push_back(Point(0., 0., 0.));
    soft2.controlpoints.push_back(Point(5., -5., -5.));
    soft2.speeds.push_back(Point(0., 0., 0.));
    soft2.controlpoints.push_back(Point(-5., 5., -5.));
    soft2.speeds.push_back(Point(0., 0., 0.));

    soft.set_r();
    soft2.set_r();

    aquarium.objects.push_back(&water);
    aquarium.objects.push_back(&fish);
    aquarium.objects.push_back(&field);
    aquarium.objects.push_back(&soft2);
    aquarium.createMaterials();
    aquarium.createVoxelArray();

    glEnable(GL_DEPTH_TEST);
    lastMovingTime = 0.;

    setShadowMapShaders();
    setAquariumShaders();
}
It doesn't even print the "onInitialization" string. The objects in the method are global variables, and all of the methods called here are implemented. What could cause it not to print the "onInitialization" string? soft.controlpoints, soft.speeds, and aquarium.objects are public fields. Even if I comment out everything else, the string doesn't appear, but the window is created. And it's still not running from inside Qt Creator.
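One caveat when relying on printf output to decide whether a function was entered: stdout is usually line-buffered, so a message without a trailing newline may never appear if the program crashes before the buffer is flushed. A small debug-print sketch that avoids this ambiguity:

    // Flush explicitly so the message is visible even if the program crashes later
    printf("onInitialization\n");
    fflush(stdout);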
It turns out that the problem was in the file reader method (which reads in the GLSL shader code): Qt Creator's working directory wasn't set to the directory that contains the shader source files.
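For anyone hitting the same symptom: the working directory used by Qt Creator can be changed in the project's Run settings, and a defensive file reader makes this kind of problem obvious instead of crashing. The helper below is only an illustrative sketch (loadShaderSource is not the project's actual function):

#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Illustrative helper: read a GLSL source file and fail loudly if the path
// cannot be opened (e.g. because the working directory is wrong).
std::string loadShaderSource(const char* path) {
    std::ifstream file(path);
    if (!file) {
        std::fprintf(stderr, "Cannot open shader file '%s' -- check the working directory\n", path);
        std::exit(1);
    }
    std::stringstream buffer;
    buffer << file.rdbuf();   // slurp the whole file
    return buffer.str();
}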
Related
I was troubleshooting an OpenGL application on a new computer when I discovered that GLFW could not create a window with the requested version of OpenGL. I created a minimal version of the application to test which OpenGL version actually gets created, and no matter what version I hint, the version I get is 0.0. Do I simply not have OpenGL? That seems impossible, since glxgears runs and glxinfo suggests that I have version 2.1.
#include <iostream>
#include <GLFW/glfw3.h>

int main(int argc, const char *argv[]) {
    if(!glfwInit()) {
        return 1;
    }

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    auto win = glfwCreateWindow(640, 480, "", NULL, NULL);
    if(!win) {
        return 1;
    }

    int major = 0, minor = 0;
    glfwMakeContextCurrent(win);
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::cout << "Initialized with OpenGL "
              << major << "." << minor << std::endl;

    glfwDestroyWindow(win);
    glfwTerminate();
}
The output of the application is "Initialized with OpenGL 0.0". A window briefly opens and closes and the application terminates without errors.
The GL_MAJOR_VERSION and GL_MINOR_VERSION queries were introduced in GL 3.0. On older contexts, they just generate a GL_INVALID_ENUM error during the glGetIntegerv call and leave your variables untouched.
You have to use glGetString(GL_VERSION) to reliably get the version number if you can't make sure that you are on a >= 3.0 context. If you need the version as numbers, you'll have to parse the string manually.
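As a sketch of that approach, dropped into the program from the question in place of the two glGetIntegerv calls (it assumes the context is current, as it is there, and needs <cstdio> for sscanf):

// Query the version string (valid on every context version) and parse
// the leading "major.minor" numbers manually.
const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
int major = 0, minor = 0;
if (version && std::sscanf(version, "%d.%d", &major, &minor) == 2) {
    std::cout << "Initialized with OpenGL " << major << "." << minor << std::endl;
} else {
    std::cout << "Could not parse GL_VERSION: " << (version ? version : "(null)") << std::endl;
}

GLFW can also report the created context's version directly via glfwGetWindowAttrib(win, GLFW_CONTEXT_VERSION_MAJOR) and GLFW_CONTEXT_VERSION_MINOR.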
My IDE can't recognize the glActiveTexture function.
I have installed freeglut and the GLEW lib. When I build my project the IDE doesn't show any errors, but when I run the program I get a "has stopped working" error. I don't really know how to fix it or what causes this problem.
Another thing is that the IDE knows the name of the function (it shows it with a # symbol), but I guess it doesn't know the function itself (it should show a () symbol, just like the first function): glActiveTexture.
I hope someone knows a solution to this problem.
Edit 1
Here is my example code:
#define GLEW_STATIC

#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glut.h>
#endif

#include <iostream>
#include <stdlib.h>

using namespace std;

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);

    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        cout << "Error: " << glewGetErrorString(err) << endl;
    }
    else cout << "Initialized" << endl;

    return EXIT_SUCCESS;
}
and I'm getting "Error: Missing GL version".
Here is the glewinfo output:
GLEW version 1.13.0
Reporting capabilities of pixelformat 3
Running on a Intel(R) HD Graphics 4600 from Intel
OpenGL version 4.3.0 - Build 10.18.10.3960 is supported
You need to create an OpenGL rendering context before calling glewInit:
glutInit(&argc, argv);
glutCreateWindow("My Program");
GLenum err = glewInit();
See here for details.
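Applied to the code from the question, the minimal program would look roughly like this (only the glutCreateWindow call is new; the __APPLE__ branch is omitted for brevity):

#define GLEW_STATIC
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glut.h>
#include <iostream>
#include <stdlib.h>

using namespace std;

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutCreateWindow("My Program");   // creates the OpenGL context GLEW needs

    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        cout << "Error: " << glewGetErrorString(err) << endl;
    }
    else cout << "Initialized" << endl;

    return EXIT_SUCCESS;
}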
I want to use Qt 4.8.6 to render OpenGL content with a QGLWidget. The machine I'm working on is a MacBook Pro with OS X 10.9.4.
The QGLWidget is created by passing a QGLFormat object requesting a 3.2 core profile. The problem I am encountering is that the OpenGL version reported by the QGLContext remains 1.0, no matter what QGLFormat I specify.
After researching the topic I found the Qt OpenGL Core Profile Tutorial. However, the example source code reports the same OpenGL version 1.0 as before. Curiously, the call
qDebug() << "Widget OpenGl: " << format().majorVersion() << "." << format().minorVersion();
qDebug() << "Context valid: " << context()->isValid();
qDebug() << "Really used OpenGl: " << context()->format().majorVersion() << "." << context()->format().minorVersion();
qDebug() << "OpenGl information: VENDOR: " << (const char*)glGetString(GL_VENDOR);
qDebug() << " RENDERDER: " << (const char*)glGetString(GL_RENDERER);
qDebug() << " VERSION: " << (const char*)glGetString(GL_VERSION);
qDebug() << " GLSL VERSION: " << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
reported a version string of 2.1
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 2.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 1.20
Using the Cocoa code suggested in this OS X OpenGL context discussion from 2011, the output of the version numbers changed to
Widget OpenGl: 1 . 0
Context valid: true
Really used OpenGl: 1 . 0
OpenGl information: VENDOR: NVIDIA Corporation
RENDERDER: NVIDIA GeForce GT 750M OpenGL Engine
VERSION: 4.1 NVIDIA-8.26.26 310.40.45f01
GLSL VERSION: 4.10
While the driver is now reporting the expected OpenGL version number, I am still only able to get a 1.0 QGLWidget context. The QGLFormat object that is passed to the QGLWidget constructor is set up using
QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);
I am somewhat at a loss as to why I am still only getting a version 1.0 context. Even without the Cocoa-generated OpenGL context it should be possible to raise the context version to 2.1, but it remains fixed at 1.0 regardless of the QGLFormat passed to the constructor.
Any pointers as to why the QGLWidget context remains at version 1.0 are very much appreciated.
Update 1
Further experimentation showed that the code returns the requested OpenGL version on Ubuntu 13.04 Linux. The issue seems to be specific to OS X.
Update 2
I built a minimal (non-)working example:
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>
#include <QtCore/QDebug>

int main(int argc, char **argv) {
    QApplication app(argc, argv);

    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setVersion(3, 2);
    fmt.setProfile(QGLFormat::CoreProfile);
    fmt.setSampleBuffers(true);

    QGLWidget c(fmt);
    c.show();
    qDebug() << c.context()->requestedFormat();
    qDebug() << c.context()->format();

    return app.exec();
}
which can be built on Ubuntu using
g++ main.cpp -I/usr/include/qt4 -lQtGui -lQtCore -lQtOpenGL -lGL -o test
or under OS X
g++ main.cpp -framework OpenGL -framework QtGui -framework QtCore -framework QtOpenGL -o test
It prints two lines of QGLFormat debug output. The first is the requested format and the second line is the actual context format. Both are supposed to show a major.minor version number of 3.2. It seems to be working under Ubuntu Linux, but fails when using OS X.
Update 3
Fun times. It might be a bug in Qt 4.8.6, since the issue does not occur when compiling the example against Qt 5.3.1. A bug report has been filed.
Can someone else verify this behaviour?
Yes, that's platform-specific. Please find the solution here.
Override QGLContext::chooseMacVisual to specify the platform-specific initialization.
CustomGLContext.hpp:
#ifdef Q_WS_MAC
void* select_3_2_mac_visual(GDHandle handle);
#endif // Q_WS_MAC

class CustomGLContext : public QGLContext {
    ...
#ifdef Q_WS_MAC
    void* chooseMacVisual(GDHandle handle) override {
        return select_3_2_mac_visual(handle); // call cocoa code
    }
#endif // Q_WS_MAC
};
gl_mac_specific.mm:
void* select_3_2_mac_visual(GDHandle handle)
{
    static const int Max = 40;
    NSOpenGLPixelFormatAttribute attribs[Max];
    int cnt = 0;

    attribs[cnt++] = NSOpenGLPFAOpenGLProfile;
    attribs[cnt++] = NSOpenGLProfileVersion3_2Core;
    attribs[cnt++] = NSOpenGLPFADoubleBuffer;
    attribs[cnt++] = NSOpenGLPFADepthSize;
    attribs[cnt++] = (NSOpenGLPixelFormatAttribute)16;
    attribs[cnt] = 0;

    Q_ASSERT(cnt < Max);

    return [[NSOpenGLPixelFormat alloc] initWithAttributes:attribs];
}
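For completeness, a sketch of how the custom context could then be handed to the widget. This assumes CustomGLContext forwards the QGLFormat to the QGLContext base constructor, and it uses the Qt 4 QGLWidget constructor that takes ownership of a context:

QGLFormat fmt;
fmt.setProfile(QGLFormat::CoreProfile);
fmt.setVersion(3, 2);
fmt.setSampleBuffers(true);

// On OS X, chooseMacVisual() is overridden, so the Cocoa code above picks
// the 3.2 core profile pixel format for this context.
QGLWidget* widget = new QGLWidget(new CustomGLContext(fmt));
widget->show();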
I am finding that QGLShaderProgram is consistently failing to compile any shader and providing no error log. Here are the symptoms:
QGLShaderProgram reports that it failed to compile but produces an empty error log. If I try to bind the shader an exception is thrown.
I can compile a shader using glCompileShader without problem. However, the first time I try to compile this way after QGLShaderProgram has failed, it fails with this error log:
ERROR: error(#270) Internal error: Wrong symbol table level
ERROR: 0:2: error(#232) Function declarations cannot occur inside of functions:
main
ERROR: error(#273) 2 compilation errors. No code generated
Following that one failure, the next attempt to compile using glCompileShader works fine.
The problem has arisen only since upgrading from Qt 4.8 to 5.2. Nothing else has changed on this machine.
I have tested on two PCs, one with an ATI Radeon HD 5700, the other with an AMD FirePro V7900. The problem only appears on the Radeon PC.
Here is my test code demonstrating the problem:
main.cpp
#include <QApplication>
#include "Test.h"
int main(int argc, char* argv[])
{
    QApplication* app = new QApplication(argc, argv);
    Drawer* drawer = new Drawer;
    return app->exec();
}
Test.h
#pragma once

#include <QObject>
#include <QTimer>
#include <QWindow>
#include <QOpenGLContext>
#include <QOpenGLFunctions>

class Drawer : public QWindow, protected QOpenGLFunctions
{
    Q_OBJECT
public:
    Drawer();

    QTimer* mTimer;
    QOpenGLContext* mContext;
    int frame;

public Q_SLOTS:
    void draw();
};
Test.cpp
#include "Test.h"
#include <QGLShaderProgram>
#include <iostream>
#include <ostream>
using namespace std;
Drawer::Drawer()
: mTimer(new QTimer)
, mContext(new QOpenGLContext)
, frame(0)
{
mContext->create();
setSurfaceType(OpenGLSurface);
mTimer->setInterval(40);
connect(mTimer, SIGNAL(timeout()), this, SLOT(draw()));
mTimer->start();
show();
}
const char* vertex = "#version 110 \n void main() { gl_Position = gl_Vertex; }";
const char* fragment = "#version 110 \n void main() { gl_FragColor = vec4(0.0,0.0,0.0,0.0); }";
void Drawer::draw()
{
mContext->makeCurrent(this);
if (frame==0) {
initializeOpenGLFunctions();
}
// Compile using QGLShaderProgram. This always fails
if (frame < 5)
{
QGLShaderProgram* prog = new QGLShaderProgram;
bool f = prog->addShaderFromSourceCode(QGLShader::Fragment, fragment);
cout << "fragment "<<f<<endl;
bool v = prog->addShaderFromSourceCode(QGLShader::Vertex, vertex);
cout << "vertex "<<v<<endl;
bool link = prog->link();
cout << "link "<<link<<endl;
}
// Manual compile using OpenGL direct. This works except for the first time it
// follows the above block
{
GLuint prog = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(prog, 1, &fragment, 0);
glCompileShader(prog);
GLint success = 0;
glGetShaderiv(prog, GL_COMPILE_STATUS, &success);
GLint logSize = 0;
glGetShaderiv(prog, GL_INFO_LOG_LENGTH, &logSize);
GLchar* log = new char[8192];
glGetShaderInfoLog(prog, 8192, 0, log);
cout << "manual compile " << success << endl << log << endl;
delete[] log;
}
glClearColor(1,1,0,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
mContext->swapBuffers(this);
frame++;
}
Elsewhere, I have tested using QGLWidget, and on a project that uses GLEW instead of QOpenGLFunctions with exactly the same results.
The version of Qt I'm linking against was built with the following configuration:
configure -developer-build -opensource -nomake examples -nomake tests -mp -opengl desktop -icu -confirm-license
Any suggestions? Or shall I just send this in as a bug report?
Update
In response to peppe's comments:
1) What does QOpenGLDebugLogger say?
The only thing I can get from QOpenGLDebugLogger is
QWindowsGLContext::getProcAddress: Unable to resolve 'glGetPointerv'
This is printed when I initialize it (and not as a debug event firing, but just to the console). It happens even though mContext->hasExtension(QByteArrayLiteral("GL_KHR_debug")) returns true and I'm initializing it within the first frame's draw() function (roughly as in the sketch after this list).
2) Can you print the compile log of the QOGLShaders even if they compile successfully?
I cannot successfully compile QOpenGLShader or QGLShader at any point so I'm not able to test this. However, when compiling successfully using plain GL functions, the log returns blank.
3) Which GL version did you get from the context? (Check with QSurfaceFormat).
I've tried with versions 3.0, 3.2, 4.2, all with the same result.
4) Please set the same QSurfaceFormat on both the context and the window before creating them
5) Remember to create() the window
I've implemented both of these now and the result is the same.
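For reference, the logger setup described in point 1 looks roughly like this. It is only a sketch: it assumes it runs inside draw() with mContext current, that <QOpenGLDebugLogger> is included, and that the context was requested with the QSurfaceFormat::DebugContext option:

// Minimal QOpenGLDebugLogger setup sketch
QOpenGLDebugLogger* logger = new QOpenGLDebugLogger(this);
connect(logger, &QOpenGLDebugLogger::messageLogged,
        [](const QOpenGLDebugMessage& msg) { qDebug() << msg; });
if (logger->initialize())   // needs GL_KHR_debug on the current context
    logger->startLogging(QOpenGLDebugLogger::SynchronousLogging);
else
    qDebug() << "QOpenGLDebugLogger could not be initialized";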
I've just tested on a third PC and that has no issues, so it is this specific computer, which, incidentally, happens to be a Mac Pro running Windows in Boot Camp. It has had absolutely no trouble in any other context running the latest ATI drivers, but I can only really conclude that there is a bug somewhere between the ATI drivers, this computer's graphics chip, and QOpenGLShaderProgram.
I think I'm unlikely to find a solution, so I'm giving up. Thank you for all your input!
I am following some tutorials and came up with the following code:
// rendering.cpp
#include "rendering.h"
#include <GL/gl.h>
#include <GL/freeglut.h>
void DrawGLScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
}
int InitGL(int argc, char** argv)
{
    /*glShadeModel(GL_SMOOTH);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);*/

    glutInit(&argc, argv);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutDisplayFunc(DrawGLScene);
    glutCreateWindow("Swimming Simulation");
    glutMainLoop(); // Enter GLUT's main loop
    return true;
}
My main function is very simple and only calls that function:
#include "rendering.h"
int main(int argc, char** argv)
{
    InitGL(argc, argv);
    return 0;
}
I am compiling with this command:
g++ -Wall -g swim.cpp rendering.cpp -lglut -lGLU -o swim
Running swim creates a window as expected. However, if I uncomment the lines in InitGL, then I get a segmentation fault when running the program:
(gdb) r
Starting program: <dir>
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x000000335ca52ca7 in glShadeModel () from /usr/lib64/libGL.so.1
Missing separate debuginfos, use: debuginfo-install freeglut-2.6.0-6.fc15.x86_64 glibc-2.14.90-24.fc16.6.x86_64 libX11-1.4.3-1.fc16.x86_64 libXau-1.0.6-2.fc15.x86_64 libXdamage-1.1.3-2.fc15.x86_64 libXext-1.3.0-1.fc16.x86_64 libXfixes-5.0-1.fc16.x86_64 libXi-1.4.5-1.fc16.x86_64 libXxf86vm-1.1.1-2.fc15.x86_64 libdrm-2.4.33-1.fc16.x86_64 libgcc-4.6.3-2.fc16.x86_64 libstdc++-4.6.3-2.fc16.x86_64 libxcb-1.7-3.fc16.x86_64 mesa-libGL-7.11.2-3.fc16.x86_64 mesa-libGLU-7.11.2-3.fc16.x86_64
(gdb) backtrace
#0 0x000000335ca52ca7 in glShadeModel () from /usr/lib64/libGL.so.1
#1 0x0000000000401d67 in InitGL (argc=1, argv=0x7fffffffe198)
at rendering.cpp:25
#2 0x0000000000401c8c in main (argc=1, argv=0x7fffffffe198) at swim.cpp:37
What should I be doing here to get my program to run without crashing?
You fell into a tricky pitfall of GLUT. GLUT is sort of a state machine, like OpenGL (though it's not part of OpenGL), and the callback functions must be set after creating or selecting a window. In your case, move the call to glutDisplayFunc (and any other callback setters) after the call to glutCreateWindow.
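Applied to InitGL from the question, the reordering would look roughly like this (window first, then the GL state calls and the callback registration):

int InitGL(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Swimming Simulation");   // creates the OpenGL context

    // GL calls are only valid once the context exists
    glShadeModel(GL_SMOOTH);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);

    glutDisplayFunc(DrawGLScene);              // callbacks after the window exists
    glutMainLoop();                            // enter GLUT's main loop
    return true;
}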
Get rid of GLUT and use something better like GLFW. Also, a lot of those functions are deprecated, so use a modern tutorial like
http://www.opengl-tutorial.org/
or
http://ogldev.atspace.co.uk/
OpenGL functions can be called only when there is an OpenGL context, i.e. after the glutCreateWindow call if you use GLUT.
They shouldn't crash the application, though...