Valid OpenGL context - opengl

How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.

From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the errors are caused by OpenGL functions being called before a valid OpenGL context has been created. OpenGL is a state machine. Only after the machine has been started up and is humming in the ready state can it be put to work.
Here is some simple code to create a valid OpenGL context:
#include <stdlib.h>
#include <GL/glut.h>
// Window attributes
static const unsigned int WIN_POS_X = 30;
static const unsigned int WIN_POS_Y = WIN_POS_X;
static const unsigned int WIN_WIDTH = 512;
static const unsigned int WIN_HEIGHT = WIN_WIDTH;
void glInit(int, char **);
void display(void);

int main(int argc, char * argv[])
{
    // Initialize OpenGL
    glInit(argc, argv);

    // A valid OpenGL context has been created.
    // You can call OpenGL functions from here on.
    glutMainLoop();
    return 0;
}

void glInit(int argc, char ** argv)
{
    // Initialize GLUT
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE);
    glutInitWindowPosition(WIN_POS_X, WIN_POS_Y);
    glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
    glutCreateWindow("Hello OpenGL!");

    // GLUT requires a display callback for every shown window
    glutDisplayFunc(display);
}

void display(void)
{
    // Clear the window and show the (empty) back buffer
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}
Note:
The call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context.
The window created with glutCreateWindow() is not visible until glutMainLoop() is called. GLUT also requires a display callback to be registered with glutDisplayFunc() before the main loop is entered.

Related

Calling std::thread() around a function works differently

Is there any reason why this code here:
int main(int argc, char* argv[])
{
    Main::Init();
    std::thread worker(Main::Mainloop);
    worker.join();
    Main::Free();
    return 0;
}
should work differently to this code here:
int main(int argc, char* argv[])
{
    Main::Init();
    Main::Mainloop();
    Main::Free();
    return 0;
}
Noting that the Main class is defined as a singleton, here is the code:
main.h
#pragma once
#ifndef MAIN_H
#define MAIN_H
#include "window.h"
#include "mainloop.h"
class Main ///Singleton
{
public:
    Main(const Main&) = delete;
    Main(Main&&) = delete;
    Main& operator=(const Main&) = delete;
    Main& operator=(Main&&) = delete;
private:
    Main();
    static Main& Get_Instance();
    friend int main(int argc, char* argv[]);
    static void Mainloop();
    static void Init();
    static void Free();
};
#endif // MAIN_H
The first example above fails to initialise one of GLFW, GLEW, and ImGui, which are what I am using for my program. I was trying to split up the initialisation of the program, but then I ran into this issue. When I dug further, I reached this point, and it doesn't really make any sense why it shouldn't work. Basically, it either throws an exception or ImGui spams me with many errors during runtime saying:
failed to compile vertex shader!
failed to compile fragment shader!
failed to link shader program! (with GLSL `#version 460`)
yet the window opens up, and I only get these errors at runtime with the thread example, not with the other one.
All of those libraries in some way interact with OpenGL and therefore are very sensitive to what thread they're being executed on. The current OpenGL context is thread-specific; each thread has its own current context, and a context can only be current within one thread at any time.
Creating a GLFW window creates an OpenGL context. If you then switch to another thread, that context will not be current in that thread unless you tell GLFW to make it current in that thread.
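For instance, if the GLFW window (and hence the context) is created on the main thread but rendering happens on a worker thread, the worker has to claim the context with glfwMakeContextCurrent() before any GL, GLEW, or ImGui calls are made there. A minimal sketch assuming plain GLFW only (the window size, title, and render-loop contents are placeholders, not taken from the question):
#include <GLFW/glfw3.h>
#include <thread>

static void renderLoop(GLFWwindow* window)
{
    // Make the context current on THIS thread before any OpenGL calls
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT); // OpenGL calls are legal on this thread now
        glfwSwapBuffers(window);
    }
    glfwMakeContextCurrent(nullptr); // release the context again
}

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);

    // glfwCreateWindow does not make the context current; hand the window to
    // the worker thread and let it claim the context there instead.
    std::thread worker(renderLoop, window);
    while (!glfwWindowShouldClose(window))
        glfwWaitEvents(); // event processing must stay on the main thread
    worker.join();

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
Note that GLFW's event processing (glfwPollEvents()/glfwWaitEvents()) still has to stay on the main thread; only the context, and with it all rendering, moves to the worker.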

Is there any way to make really sure QSplashScreen has been repainted on the screen?

I have a problem that on Linux with Xorg (Ubuntu 14.04) and Qt 5.5.1 QSplashScreen isn't painted until I get to the event loop. Even if I call QApplication::processEvents() multiple times, it still isn't painted, even after 1000 calls, although the window is already on the screen, retaining the original pixels which were there before the app launched, thus being effectively invisible*. From this answer I got an idea of using a timed loop of calling QApplication::processEvents(), like here:
#include <QThread>
#include <QApplication>
#include <QSplashScreen>
int main(int argc, char** argv)
{
    QApplication a(argc, argv);
    QSplashScreen splash;
    splash.show();
    splash.showMessage("Loading...");
    // The hack to try to ensure that splash screen is repainted
    for(int i = 0; i < 30; ++i)
    {
        QThread::usleep(1e3);
        a.processEvents();
    }
    QThread::usleep(5e6); // simulate slow loading process
    splash.showMessage("Finished");
    return a.exec();
}
The above code actively sleeps for 30 ms in an attempt to make QSplashScreen repaint. This works for me, but I'm not sure that it'll always work e.g. on a busy/slow CPU or in whatever other conditions (the magic value of 30 iterations was found empirically).
Another, generally quite intrusive, way would be to do all the necessary loading in another thread, just to make sure that the QSplashScreen in the main thread has a running event loop. Since this would require considerably reworking the main program, it doesn't look like a good solution.
So, is there any way to make sure that QSplashScreen has been repainted, so that its window doesn't contain garbage, and only then to proceed with long blocking loading process?
* I discovered this when I moved a window behind the splash screen
One way to avoid the unknowable a priori magic timeout is to wait for an exact event: the paint event. On X11 it appears to come with a delay. To do this waiting we'll have to subclass QSplashScreen and override QSplashScreen::paintEvent(), like here:
#include <QThread>
#include <QApplication>
#include <QSplashScreen>
class MySplashScreen : public QSplashScreen
{
    bool painted = false;

    void paintEvent(QPaintEvent* e) override
    {
        QSplashScreen::paintEvent(e);
        painted = true;
    }
public:
    void ensureFirstPaint() const
    {
        while(!painted)
        {
            QThread::usleep(1e3);
            qApp->processEvents();
        }
    }
};
int main(int argc, char** argv)
{
    QApplication a(argc, argv);
    MySplashScreen splash;
    splash.show();
    splash.showMessage("Loading...");
    splash.ensureFirstPaint();
    QThread::usleep(5e6); // simulate slow loading process
    splash.showMessage("Finished");
    return a.exec();
}
The solution is rather simple: keep the event loop running until the window is repainted. This should be done without any spinning, i.e. you shouldn't be using any explicit timeouts.
#include <QtWidgets>
class EventSignaler : public QObject {
    Q_OBJECT
    QEvent::Type m_type;
protected:
    bool eventFilter(QObject *src, QEvent *ev) override {
        if (ev->type() == m_type)
            emit hasEvent(src);
        return false;
    }
public:
    EventSignaler(QEvent::Type type, QObject *object) :
        QObject(object), m_type(type) {
        object->installEventFilter(this);
    }
    Q_SIGNAL void hasEvent(QObject *);
};

// Runs the event loop until the widget receives its first paint event.
int execUntilPainted(QWidget *widget) {
    EventSignaler painted{QEvent::Paint, widget};
    QObject::connect(&painted, &EventSignaler::hasEvent,
                     qApp, &QCoreApplication::quit);
    return qApp->exec();
}

int main(int argc, char **argv) {
    QApplication app{argc, argv};
    QSplashScreen splash;
    splash.show();
    splash.showMessage("Loading...");
    execUntilPainted(&splash);
    QThread::sleep(5); // simulate slow loading process
    splash.showMessage("Finished");
    return app.exec();
}
#include "main.moc"

Qt Segmentation fault at exec()

I have a very strange problem while trying to run a QProcess in a class HmiApplication, which is derived from QApplication.
The application throws a SIGSEGV in line 6 of main.cpp. This occurs only if line 11 of hmiapplication.cpp is commented out (i.e. if I don't qDebug() the stdout of the QProcess).
For the sake of simplicity and clarity, I didn't handle any return values while creating the QProcess.
main.cpp
#include "hmiapplication.h"
int main(int argc, char **argv)
{
    HmiApplication hmi(argc, argv);
    return hmi.exec(); // LINE 6 - SIGSEGV
}
hmiapplication.h
#ifndef HMIAPPLICATION_H
#define HMIAPPLICATION_H
#include <QApplication>
#include <QProcess>
class HmiApplication : public QApplication
{
    Q_OBJECT
public:
    HmiApplication(int argc, char **argv);
    virtual ~HmiApplication();
private:
    QProcess *macFinder = nullptr;
};
#endif // HMIAPPLICATION_H
hmiapplication.cpp
#include "hmiapplication.h"
HmiApplication::HmiApplication(int argc, char **argv) : QApplication(argc, argv)
{
    macFinder = new QProcess(this);
    macFinder->start("arping", QStringList() << "-c 2" << "192.168.1.1");
    macFinder->waitForReadyRead();
    QString ret(macFinder->readAllStandardOutput());
    ret = ret.mid(ret.indexOf('[') + 1, 17);
    qDebug() << ret; // LINE 11
}

HmiApplication::~HmiApplication()
{
}
EDIT:
If I add QVector<Camera*> cameras; to the header and
for(quint8 i = 0; i < 10; i++) {
    Camera *cam = new Camera(i);
    cameras.append(cam);
}
to the source file, it doesn't matter whether or not I remove the qDebug() line; it throws a segmentation fault in both cases.
Camera is a class derived from QLabel and works perfectly without the QProcess mentioned above.
The QApplication constructor accepts its first parameter by reference...
QApplication::QApplication(int &argc, char **argv)
With the documentation also warning that...
The data referred to by argc and argv must stay valid for the entire
lifetime of the QApplication object. In addition, argc must be greater
than zero and argv must contain at least one valid character string.
However, you pass argc by value to HmiApplication. Hence the QApplication constructor receives a non-const reference to the local copy, which goes out of scope at the end of the HmiApplication constructor, leading to undefined behaviour later on.
Change the signature of your constructor to...
HmiApplication::HmiApplication(int &argc, char **argv)
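A minimal sketch of the corrected declaration and definition (only the signature changes; the constructor body stays as in the question):
// hmiapplication.h (only the changed parts shown)
class HmiApplication : public QApplication
{
    Q_OBJECT
public:
    HmiApplication(int &argc, char **argv); // argc taken by reference
    virtual ~HmiApplication();
};

// hmiapplication.cpp
HmiApplication::HmiApplication(int &argc, char **argv)
    : QApplication(argc, argv) // now forwards a reference to main()'s argc
{
    // ... QProcess setup as in the question ...
}
This way QApplication ends up referring to the argc that lives in main() for the whole lifetime of the application object, as the documentation requires.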

QApplication Program Execution Segmentation

Could anybody help me with this problem? I am trying to run an application built with CMake. In the main file of my program I get a segmentation fault when the program reaches the line of code that executes the QApplication. Here is the code fragment below:
int main(int argc, char** argv)
{
    bool viewing;
    parse_command_line( argc, argv );
#ifdef _GRAPHICS_
    // note: the code runs correctly when this line is excluded and glutInit is
    // called in another class named Viewer (see the Viewer instantiated below);
    // however, for my specific application I need to call glutInit in main
    glutInit(&argc, argv);
#endif
    if( viewing )
    {
#ifdef _GRAPHICS_
        QApplication application(argc, argv);
        Viewer *viewer = new Viewer( 0, exp, argc, argv );
        Interface *render = new Interface( 0, exp, viewer );
        render->show();
        return application.exec(); // this line causes the segmentation fault
        delete viewer;
        delete render;
#endif
    }
}
When glutInit is called inside Viewer, application and viewer receive all the command-line arguments. When you call it first, as you do here, glutInit eats all the parameters it understands, so the other objects might miss some arguments.
Possible solutions: call glutInit last (after the QApplication has been created), or pass glutInit a copy of argc/argv, as sketched below.
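A minimal sketch of the second option, assuming the surrounding code stays as in the question (the helper name glutInitOnCopy is made up for illustration):
#include <GL/glut.h>
#include <vector>

// Give glutInit a private copy of the argument vector so it cannot remove
// anything that QApplication, Viewer, etc. still need to see.
static void glutInitOnCopy(int argc, char** argv)
{
    int argcCopy = argc;
    std::vector<char*> argvCopy(argv, argv + argc);
    argvCopy.push_back(nullptr); // argv is conventionally null-terminated

    glutInit(&argcCopy, argvCopy.data()); // glutInit may only modify the copy
}
In main() you would then call glutInitOnCopy(argc, argv); in place of glutInit(&argc, argv); and pass the untouched argc/argv on to QApplication and Viewer as before.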

libgit2 and Qt error

I want to use git from a Qt application. So far I use QProcess, but I do not want to keep doing that, so I found libgit2.
This works as expected:
#include <QApplication>
#include "git2.h"
int main(int argc, char* argv[])
{
    git_repository* repo = 0;
    git_clone(&repo, "/path_to/barerep", "/path_to/test_clone", NULL);
    git_repository_free(repo);
    repo = 0;
}
But here, git_clone crashes.
int main(int argc, char* argv[])
{
    QApplication a(argc, argv);
    git_repository* repo = 0;
    git_clone(&repo, "/path_to/barerep", "/path_to/test_clone", NULL);
    git_repository_free(repo);
    repo = 0;
    return a.exec();
}
The error is:
*** Error in `/path_to/gittest': free(): invalid pointer: 0x09d53a88 ***
Any suggestions? Of course, omitting QApplication is not an alternative. The same error occurs without return a.exec().
Note: Actually, there is a class GitRepository with a method clone(const QString & url) (the path is stored somewhere in the class).
Again, this works
int main(int argc, char* argv[])
{
    GitRepository g;
    g.clone("path_to/barerep");
}
But this one, with a QObject constructed first, does not:
int main(int argc, char* argv[])
{
    QObject(); // <--
    GitRepository g;
    g.clone("path_to/barerep");
}
bool GitRepository::clone(const QString & url)
{
    git_repository* repo = 0;
    git_clone(&repo, CSTR(url), CSTR(path()), NULL);
    git_repository_free(repo);
    repo = 0;
    //loadFromTempDir();
    return true;
}
Replacing QApplication by QObject in the first example suppresses the error.
You need to call git_libgit2_init() before calling any other libgit2 functions. As the documentation says:
This function must be called before any other libgit2 function in order to set up global state and threading.
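Applied to the failing example, that just means initialising (and, symmetrically, shutting down) the library around the libgit2 calls; a minimal sketch based on the question's second main():
#include <QApplication>
#include "git2.h"

int main(int argc, char* argv[])
{
    QApplication a(argc, argv);

    git_libgit2_init(); // set up libgit2's global state before any other call

    git_repository* repo = 0;
    git_clone(&repo, "/path_to/barerep", "/path_to/test_clone", NULL);
    git_repository_free(repo);
    repo = 0;

    git_libgit2_shutdown();

    return a.exec();
}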
These kinds of errors are really hard to find. I also had problems mixing Qt libraries with other libraries. The trick is to organize your code so that only one of the libraries is included in any given compilation unit.
Create a class that wraps libgit2, and only include the libgit2 headers in the cpp file of that class. Do not include any Qt headers in the same file.
Only refer to libgit2 through your wrapper. Sure, it seems like a lot of work, but as a result your code will be cleaner, and these mysterious errors will be gone. A sketch of such a wrapper follows.
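A minimal sketch of what such a wrapper could look like (the class and method names here are made up for illustration; only the cpp file sees git2.h):
// gitwrapper.h: no libgit2 and no Qt headers, plain C++ only
#include <string>

class GitWrapper
{
public:
    GitWrapper();
    ~GitWrapper();
    bool clone(const std::string& url, const std::string& path);
};

// gitwrapper.cpp: the only translation unit that includes git2.h
#include "gitwrapper.h"
#include "git2.h"

GitWrapper::GitWrapper()  { git_libgit2_init(); }
GitWrapper::~GitWrapper() { git_libgit2_shutdown(); }

bool GitWrapper::clone(const std::string& url, const std::string& path)
{
    git_repository* repo = nullptr;
    int error = git_clone(&repo, url.c_str(), path.c_str(), nullptr);
    git_repository_free(repo);
    return error == 0;
}
The Qt side of the program only ever includes gitwrapper.h and converts its QStrings to std::string at the call site, so the libgit2 and Qt headers never meet in one file.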