How to multithread the GLFW keyboard callback? - C++

I am creating an OpenGL application, and it worked well until I tried to use multithreading. The original code is shown below:
void Controller::BeginLoop()
{
    while (!glfwWindowShouldClose(_windowHandle)) {
        /* Render here */
        Render();
        /* Swap front and back buffers */
        glfwSwapBuffers(_windowHandle);
        /* Poll for and process events */
        glfwPollEvents();
    }
}

int main()
{
    //do some initialization
    g_controller->BeginLoop();
}
The above code works well. However, when I tried to put the event polling and the rendering into two different threads, OpenGL would not draw anything in the window. Below is the multithreaded code I used:
void Controller::BeginLoop()
{
    while (!glfwWindowShouldClose(_windowHandle)) {
        glfwMakeContextCurrent(_windowHandle);
        /* Render here */
        Render();
        /* Swap front and back buffers */
        glfwSwapBuffers(_windowHandle);
    }
}

void Render(int argc, char **argv)
{
    ::g_controller->BeginLoop();
}

int main(int argc, char **argv)
{
    std::thread renderThread(Render, argc, argv);
    while (true) {
        glfwPollEvents();
    }
    renderThread.join();
    return 0;
}
In the Render function, I do some physics and draw the result points onto the window.
I have no idea what is going wrong.

After creating a GLFW window, the OpenGL context that was created along with it is current in the thread that created the window. Before you can make an OpenGL context current in another thread, it must be released (made un-current) in the thread currently holding it. So the thread holding the context must call glfwMakeContextCurrent(NULL) before the new thread calls glfwMakeContextCurrent(windowHandle) – either before launching the new thread or by using a synchronization object (mutex, semaphore).
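A minimal sketch of that ordering, reusing the names from the question (the WindowHandle() accessor on the controller is an assumption, and BeginLoop() here is the render-only loop without glfwPollEvents):

int main()
{
    // ... create the window, g_controller, etc. as before ...

    glfwMakeContextCurrent(NULL);       // release the context in the main thread first

    std::thread renderThread([] {
        glfwMakeContextCurrent(::g_controller->WindowHandle());  // take it over here
        ::g_controller->BeginLoop();    // Render() + glfwSwapBuffers() only
    });

    // Event processing must stay on the thread that created the window (the main thread).
    while (!glfwWindowShouldClose(::g_controller->WindowHandle()))
        glfwWaitEvents();

    renderThread.join();
    return 0;
}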
BTW: symbols starting with an underscore _ are reserved for the implementation at global scope, so either make sure _windowHandle is a class member variable or use underscored names only for class members and function parameters.

Related

What is the proper way of using multiple contexts for multithreading in OpenGL?

I am trying to parallelize a program I have made in OpenGL. I have fully tested the single threaded version of my code and it works. I ran it with valgrind and things were fine, no errors and no memory leaks, and the code behaved exactly as expected in all tests I managed to do.
In the single threaded version, I am sending a bunch of cubes to be rendered. I do this by creating the cubes in a data structure called "world", sending the OpenGL information to another structure called "Renderer" by appending them to a stack, and then finally I iterate through the queue and render every object.
Since the single threaded version works I think my issue is that I am not using the multiple OpenGL contexts properly.
These are the 3 functions that pipeline my entire process:
The main function, which initializes the global structures and threads.
int main(int argc, char **argv)
{
    //Init OpenGL
    GLFWwindow* window = create_context();
    Rendering_Handler = new Renderer();

    int width, height;
    glfwGetWindowSize(window, &width, &height);
    Rendering_Handler->set_camera(new Camera(mat3(1),
        vec3(5*CHUNK_DIMS,5*CHUNK_DIMS,2*CHUNK_DIMS), width, height));

    thread world_thread(world_handling, window);

    //Render loop
    render_loop(window);

    //cleanup
    world_thread.join();
    end_rendering(window);
}
The world handling, which should run as its own thread:
void world_handling(GLFWwindow* window)
{
    GLFWwindow* inv_window = create_inv_context(window);
    glfwMakeContextCurrent(inv_window);

    World c = World();
    //TODO: this is temporary, implement this correctly
    loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        c.send_render_data(Rendering_Handler);
        openGLerror();
    }
}
And the render loop, which runs in the main thread:
void render_loop(GLFWwindow* window)
{
    //Set default OpenGL values for rendering
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glPointSize(10.f);

    //World c = World();
    //loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();
        Rendering_Handler->update(window);
        //c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        //c.send_render_data(Rendering_Handler);
        Rendering_Handler->render();
        openGLerror();
    }
}
Notice the comments in the third function: if I uncomment those lines and comment out the multithreading statements in the main function (i.e. single-thread my program), everything works.
I don't think this is caused by a race condition, because the queue where the OpenGL info is put before rendering is always locked before being used (i.e. whenever a thread needs to read or write the queue, it locks a mutex, does the read or write, then unlocks the mutex).
Does anybody have an intuition on what I could be doing wrong? Is it the OpenGL context?
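For reference, the usual GLFW pattern for a second context that is only used for uploads from a worker thread looks roughly like the sketch below (presumably close to what create_inv_context already does). Note that only buffers, textures and shaders are shared between contexts; container objects such as VAOs are not.

// Hidden window whose context shares objects with the main window's context.
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
GLFWwindow* inv_window = glfwCreateWindow(1, 1, "upload", NULL, window);

// In the worker thread only:
glfwMakeContextCurrent(inv_window);
// ... fill buffers / textures for the renderer ...
glFlush();   // flush (or use a fence sync) before the main context reads the new data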

C++ Visual Studio Release build unused code crash

I have a question which is quite general, but I hope someone will be able to at least point me in the right direction.
I created my project and was building it only in Debug mode with the /MDd flag.
But it started to have performance issues, so I wanted to try it in Release mode to see how it goes.
The problem is that when I use the /MD or /MT flag and Release mode, my application instantly crashes.
So I tried to find out why. It works fine in Debug. I've tried some code changes, but nothing helped. So I decided to make my app just start and comment out the rest of my code. But it was still crashing, even though that code was unused. It only stopped crashing when I completely removed those unused parts of the code.
I think it's something with variable initialization/declaration, but I'm not quite sure what I should look for.
Could someone suggest what can cause an application to crash even when the code in question is just a declaration/initialization and is not even used at run time?
I hope you can somehow understand what my problem is.
Thanks for any suggestions!
EDIT: Code which crashes when the unused code is in the project, but does not crash when I remove the unused code.
#include "core/oxygine.h"
#include "Stage.h"
#include "DebugActor.h"
//#include "Galatex.h"
using namespace oxygine;
//called each frame
int mainloop()
{
//galatex_update();
//update our stage
//update all actors. Actor::update would be called also for all children
getStage()->update();
if (core::beginRendering())
{
Color clearColor(32, 32, 32, 255);
Rect viewport(Point(0, 0), core::getDisplaySize());
//render all actors. Actor::render would be called also for all children
getStage()->render(clearColor, viewport);
core::swapDisplayBuffers();
}
//update internal components
//all input events would be passed to Stage::instance.handleEvent
//if done is true then User requests quit from app.
bool done = core::update();
return done ? 1 : 0;
}
//it is application entry point
void run()
{
ObjectBase::__startTracingLeaks();
//initialize Oxygine's internal stuff
core::init_desc desc;
#if OXYGINE_SDL || OXYGINE_EMSCRIPTEN
//we could setup initial window size on SDL builds
desc.w = 1800;
desc.h = 1000;
//marmalade settings could be changed from emulator's menu
#endif
//galatex_preinit();
core::init(&desc);
//create Stage. Stage is a root node
Stage::instance = new Stage(true);
Point size = core::getDisplaySize();
getStage()->setSize(size);
//DebugActor is a helper actor node. It shows FPS, memory usage and other useful stuff
DebugActor::show();
//initialize this example stuff. see example.cpp
//galatex_init();
#ifdef EMSCRIPTEN
/*
if you build for Emscripten mainloop would be called automatically outside.
see emscripten_set_main_loop below
*/
return;
#endif
//here is main game loop
while (1)
{
int done = mainloop();
if (done)
break;
}
//user wants to leave application...
//lets dump all created objects into log
//all created and not freed resources would be displayed
ObjectBase::dumpCreatedObjects();
//lets cleanup everything right now and call ObjectBase::dumpObjects() again
//we need to free all allocated resources and delete all created actors
//all actors/sprites are smart pointer objects and actually you don't need it remove them by hands
//but now we want delete it by hands
//check example.cpp
//galatex_destroy();
//renderer.cleanup();
/**releases all internal components and Stage*/
core::release();
//dump list should be empty now
//we deleted everything and could be sure that there aren't any memory leaks
ObjectBase::dumpCreatedObjects();
ObjectBase::__stopTracingLeaks();
//end
}
#ifdef __S3E__
int main(int argc, char* argv[])
{
run();
return 0;
}
#endif
#ifdef OXYGINE_SDL
#include "SDL_main.h"
extern "C"
{
int main(int argc, char* argv[])
{
run();
return 0;
}
};
#endif
#ifdef EMSCRIPTEN
#include <emscripten.h>
void one() { mainloop(); }
int main(int argc, char* argv[])
{
run();
emscripten_set_main_loop(one, 0, 0);
return 0;
}
#endif
So I'll write it here for other newbies like me who find themselves in a similar situation.
My problem was in the initialization of static and other variables that were outside of any function. For example:
MyObject* object = new MyObject(); // This was the reason why it was crashing; I just had to move
                                   // the initialization of such variables into a function that is
                                   // called when the object is actually needed.

void MyClass::myFunction(){
    object->doSomething();
}
So when the program started, the initialization of those variables caused the crash.
Note: it seems like the problem was only with objects; variables like integers were just fine.
Well, I'm not totally sure why this is allowed in Debug mode but crashes Release mode right after start. Maybe someone could answer under this comment and explain this behaviour; I'm just a beginner and I'm doing a lot of bad stuff, but I'm trying, and that's good, right? :D
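A minimal sketch of that workaround, with a hypothetical MyObject class: instead of constructing the global during static initialization (before main() runs), the object is constructed on first use inside a function:

struct MyObject {
    void doSomething() {}
};

// Constructed the first time this function is called, not during static
// initialization, so it cannot take the program down before main() starts.
MyObject& object()
{
    static MyObject instance;
    return instance;
}

void myFunction()
{
    object().doSomething();
}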
I hope I didn't waste too much of your time, and maybe this post will be useful to someone in the future.

Use OpenGL in a thread

I have a library that does the OpenGL rendering and receives streams from the network.
I am writing it on a Mac, but I plan to use it on Linux as well, so the window is created in Objective-C.
I start drawing in a separate thread; in another thread I receive and decode the data.
I get a crash (EXC_BAD_ACCESS) in the OpenGL calls, even if I use them only in a single thread.
My code. The GLUT version of main:
int main(int argc, const char * argv[]){
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
    int win = glutGetWindow();
    glutInitWindowSize(800, 600);
    glutCreateWindow("OpenGL lesson 1");
    client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    return 0;
}
Or the Objective-C version:
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        NSOpenGLPixelFormatAttribute attributes[] = {
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAFullScreen,
            NSOpenGLPFAScreenMask,
            CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
            (NSOpenGLPixelFormatAttribute) 0
        };
        _pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
        if (!_pixelFormat)
        {
            return nil;
        }
        //_pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                selector:@selector(_surfaceNeedsUpdate:)
                name:NSViewGlobalFrameDidChangeNotification
                object:self];
        _openGLContext = [self openGLContext];
        client_init(1280, 720, win, "192.168.0.98", 8000, 2222);
    }
    return self;
}
The client_init code:
// pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, dh_tmp);
pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, NULL);

void* ShowThread(struct drawhandle * dh){
    //glViewport(0, 0, dh->swidth, dh->sheight);//EXC_BAD_ACCESS
    glViewport(0, 0, 1280, 720);//EXC_BAD_ACCESS
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //gluOrtho2D(0, dh->swidth, 0, dh->sheight);
    gluOrtho2D(0, 1280, 0, 720);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    ...
    return 0;
}
I think the problem is that no OpenGL context has been created.
How do I create one on macOS / Linux?
This thread has no current OpenGL context. Even if you did create a context earlier in the program (not visible in your snippet), it will not be current in the thread you launch.
An OpenGL context can be "current" in at most one thread at a time. By default this is the thread that created the context. Any thread that calls OpenGL must first make a context current.
You must either create the context in this thread, or call glXMakeCurrent (Unix/Linux), aglSetCurrentContext (Mac), or wglMakeCurrent (Windows) inside ShowThread, before doing anything else related to OpenGL.
(Probably not the reason for the crash, though; see datenwolf's answer for the likely cause. Nevertheless, calling OpenGL without a current context is still wrong.)
OpenGL and multithreading are on difficult terms. It can be done, but it requires some care. First and foremost, an OpenGL context can be active in only one thread at a time. And on some systems, like Windows, extension function pointers are per-context, so with different contexts in different threads you may end up with different extension function pointers, which has to be accounted for.
So there's problem number one: you've probably got no OpenGL context current on this thread. But that alone should not crash when calling a non-extension function; it would just do nothing.
If it really crashes on the line you indicated, then the dh pointer is invalid, for sure. It's the only explanation. A pointer in C is just a number that's interpreted in a special way. If you pass pointers around, especially as a parameter to a callback or thread function, then the object the pointer points to must remain valid until it is certain that the pointer can no longer be accessed. Which means: you must not do this with things you create on the stack, i.e. with automatic storage duration in C.
This will break:
void foo(void)
{
    struct drawhandle dh_tmp;
    pthread_create(&posixThreadID, NULL, (void*(*)(void*))ShowThread, &dh_tmp);
}
Why? Because the moment foo returns, the object dh_tmp becomes invalid. But &dh_tmp (the pointer to it) is just a number, and this number will not "magically" turn to zero the moment dh_tmp becomes invalid.
You must allocate it on the heap for this to work. Of course, then there is the question of when to free the memory again.
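A sketch of the heap-allocated variant (the drawhandle fields are guessed from the commented-out code above); here the spawned thread owns the handle and frees it when it is done:

#include <pthread.h>

struct drawhandle { int swidth, sheight; };

void* ShowThread(void* arg)
{
    struct drawhandle* dh = (struct drawhandle*)arg;
    // ... make an OpenGL context current here, then use dh->swidth / dh->sheight ...
    delete dh;          // the thread releases the handle when it is finished with it
    return 0;
}

void foo(void)
{
    // Allocated on the heap, so it stays valid after foo() returns.
    struct drawhandle* dh = new drawhandle;
    dh->swidth = 1280;
    dh->sheight = 720;

    pthread_t tid;
    pthread_create(&tid, NULL, ShowThread, dh);
    pthread_detach(tid);
}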

How to visualise a calculation running in another thread in real time with vtk

I would like to visualise a calculation running in another thread with the Visualization Toolkit (VTK) in real time. The calculation spits out a new set of values to be visualised each iteration, and the graphical thread must somehow know this and load the new values.
One way to do this would be to have the main thread poll the state of the calculation. Ideally I'd prefer not to do any polling, but if there is no other way then I will.
The best way I can think of would be to have the calculation thread push an event onto the main thread's event queue every iteration of the calculation, which is then processed by the GUI. I'm not sure how to go about doing this, or if it can be done in a thread-safe manner.
I'm using VTK with gcc/C++ on Linux, with pthreads.
Listen to the Modified event on the object you're interested in, in the main thread using a vtkCommand (or appropriate derived class). You can then update your renderer and associated classes when the callback occurs.
But many VTK classes aren't thread-safe. You'll need to pause the updates while rendering occurs; otherwise it'll segfault while trying to read and write the same memory.
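A rough sketch of that observer setup, assuming the data lives in a vtkPolyData and a vtkRenderWindow should be refreshed (both class choices are placeholders for whatever you actually use):

#include <vtkCallbackCommand.h>
#include <vtkCommand.h>
#include <vtkPolyData.h>
#include <vtkRenderWindow.h>
#include <vtkSmartPointer.h>

// Runs whenever the observed object fires ModifiedEvent.
void OnDataModified(vtkObject* /*caller*/, unsigned long /*eventId*/,
                    void* clientData, void* /*callData*/)
{
    vtkRenderWindow* renderWindow = static_cast<vtkRenderWindow*>(clientData);
    renderWindow->Render();   // redraw with the updated values
}

void WatchData(vtkPolyData* data, vtkRenderWindow* renderWindow)
{
    vtkSmartPointer<vtkCallbackCommand> callback =
        vtkSmartPointer<vtkCallbackCommand>::New();
    callback->SetCallback(OnDataModified);
    callback->SetClientData(renderWindow);
    data->AddObserver(vtkCommand::ModifiedEvent, callback);
}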
I think this is the standard way: create a separate thread for window handling (i.e. processing window messages), and from time to time push data into the window (i.e. update the image).
A similar procedure with MathGL looks like the following (see How I can create FLTK/GLUT/Qt window in parallel with calculation?):
//-----------------------------------------------------------------------------
#include <mgl/mgl_fltk.h>
#include <pthread.h>
#include <unistd.h>
mglPoint pnt; // some global variable for changeable data
//-----------------------------------------------------------------------------
int sample(mglGraph *gr, void *)
{
    gr->Box(); gr->Line(mglPoint(),pnt,"Ar2"); // just draw a vector
    return 0;
}
//-----------------------------------------------------------------------------
void *mgl_fltk_tmp(void *) { mglFlRun(); return 0; }
int main (int argc, char ** argv)
{
    mglGraphFLTK gr;
    gr.Window(argc,argv,sample,"test"); // create window
    static pthread_t tmp;
    pthread_create(&tmp, 0, mgl_fltk_tmp, 0);
    pthread_detach(tmp); // run window handling in the separate thread
    for(int i=0;i<10;i++) // do calculation
    {
        sleep(1); // which can be very long
        pnt = mglPoint(2*mgl_rnd()-1,2*mgl_rnd()-1);
        gr.Update(); // update window
    }
    return 0; // finish calculations and close the window
}
//-----------------------------------------------------------------------------

Creating a Qt GUI using a thread in C++?

I am trying to create this Qt GUI using a thread, but with no luck. Below is my code. The problem is that the GUI never shows up.
/*INCLUDES HERE...
....
*/
using namespace std;

struct mainStruct {
    int s_argc;
    char ** s_argv;
};
typedef struct mainStruct mas;

void *guifunc(void * arg);

int main(int argc, char * argv[]) {
    mas m;
    m.s_argc = argc;
    m.s_argv = argv;

    pthread_t threadGUI;
    //start a new thread for gui
    int result = pthread_create(&threadGUI, NULL, guifunc, (void *) &m);
    if (result) {
        printf("Error creating gui thread");
        exit(0);
    }
    return 0;
}

void *guifunc(void * arg)
{
    mas m = *(mas *)arg;
    QApplication app(m.s_argc, m.s_argv);
    //object instantiation
    guiClass *gui = new guiClass();
    //show gui
    gui->show();
    app.exec();
    return NULL;
}
There appear to be two major issues here:
The GUI is not appearing because your main() function completes right after creating the thread, causing the process to exit straight away.
The GUI should be created on the main thread. Most frameworks require the GUI to be created, modified and executed on the main thread. You spawn threads to do work and send updates to the main thread, not the other way around.
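A minimal sketch of that pattern (assuming Qt 5.10 or later for the functor overload of QMetaObject::invokeMethod): the GUI lives on the main thread, and the worker thread posts its result back with a queued call:

#include <QApplication>
#include <QLabel>
#include <QMetaObject>
#include <chrono>
#include <thread>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);    // GUI objects belong to this (main) thread

    QLabel label("working...");
    label.show();

    // The worker does the computation, then asks the main thread to update
    // the widget; the queued connection makes the lambda run on the thread
    // that owns 'label', i.e. the main thread.
    std::thread worker([&label] {
        std::this_thread::sleep_for(std::chrono::seconds(2));   // stand-in for real work
        QMetaObject::invokeMethod(&label,
                                  [&label] { label.setText("done"); },
                                  Qt::QueuedConnection);
    });

    int rc = app.exec();
    worker.join();
    return rc;
}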
Start with a regular application, based on the Qt sample code. If you use Qt Creator, it can provide a great deal of help and skeleton code to get you started. Then once you have a working GUI, you can start looking at adding worker threads if you need them. But you should do some research on multithreading issues, as there are many pitfalls for the unwary. Have fun!