glutMainLoopEvent function causes memory leak - OpenGL

I have a main function in a single main.cpp. Basically, it first calls the update function to update the commands for rendering and then calls the rendering function to render the scene. The rendering function is in another cpp file.
In order to prevent the glutMainLoop() function from blocking the updating of commands in the main function, I use glutMainLoopEvent() from the freeglut package instead.
In my rendering function, the code
glmDraw(Model, GLM_SMOOTH|GLM_TEXTURE|GLM_MATERIAL);
is used to render the scene. If I use glutMainLoop(), the code above will be executed only once in the rendering function. However, when I use the glutMainLoopEvent() function, this code will be executed again and again and causes a memory leak problem.
Any suggestion for correcting it?

The memory leak will be somewhere in your code. Double-check that all memory you have allocated is getting deallocated properly in your render function. It's your code getting called again and again and leaking memory, not GLUT.
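As an illustration of the kind of mistake to look for: with a manual glutMainLoopEvent() loop the render function runs every frame, so any allocation inside it leaks on every iteration. The sketch below is hypothetical (it assumes the glm OBJ loader with glmReadOBJ/glmDelete and an invented model path), but it shows the pattern and the fix:

#include <GL/freeglut.h>
#include "glm.h"                     // assumption: Nate Robins' OBJ loader providing GLMmodel

static GLMmodel* g_model = NULL;

// Leaky version: allocates a fresh model every frame and never frees it.
void renderLeaky()
{
    GLMmodel* model = glmReadOBJ((char*)"scene.obj");          // hypothetical path; cast because glm's API takes char*
    glmDraw(model, GLM_SMOOTH | GLM_TEXTURE | GLM_MATERIAL);
    // missing glmDelete(model) -> leaks once per glutMainLoopEvent() iteration
}

// Fixed version: load once, draw every frame, free at shutdown with glmDelete(g_model).
void renderFixed()
{
    if (g_model == NULL)
        g_model = glmReadOBJ((char*)"scene.obj");
    glmDraw(g_model, GLM_SMOOTH | GLM_TEXTURE | GLM_MATERIAL);
}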


GLFW how to ensure glfwTerminate is called when the program exits?

In the GLFW documentation it is written that glfwTerminate will:
This will destroy any remaining window, monitor and cursor objects, restore any modified gamma ramps, re-enable the screensaver if it had been disabled and free any other resources allocated by GLFW.
and that one should call it before terminating the program. From my understanding that means that if this function is not called, the operating system doesn't have to re-enable the screensaver or restore modified gamma ramps, which is bad. How do I ensure that it is called regardless of how the program ends?
It's possible to use std::atexit to ensure it is called at the end if the program is exited via the exit command or by returning from the main function. It is also possible to do that by creating an object in the main function whose destructor calls glfwTerminate when it is destroyed. The problem is what to do when the program ends with a signal. It's not possible to just register a function using std::signal, because glfwTerminate calls standard library functions other than the ones listed in https://en.cppreference.com/w/cpp/utility/program/signal, which the site says is undefined behavior.
How do I ensure the program calls glfwTerminate? Or is it just not possible? And do I understand it correctly that without it the program can leave a modified gamma ramp after getting a signal? And are there any other ways the program can stop without calling the function?
There is no guarantee that glfwTerminate will be called in case of segfaults or other fatal errors, so it is better to handle all errors before you decide to close the program. The recommendation is to call it before the program terminates normally, so that modified gamma ramps are restored and the screensaver is re-enabled.
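For the normal-exit cases mentioned above (returning from main or calling exit), a minimal sketch could look like this, assuming GLFW 3; the GlfwGuard class name is invented for illustration, and signals or crashes are still not covered:

#include <cstdlib>
#include <GLFW/glfw3.h>

// Hypothetical RAII guard: calls glfwTerminate when it goes out of scope.
struct GlfwGuard {
    bool ok;
    GlfwGuard()  : ok(glfwInit() == GLFW_TRUE) {}
    ~GlfwGuard() { if (ok) glfwTerminate(); }
};

int main()
{
    // Option 1: atexit covers exit() and returning from main.
    // if (glfwInit()) std::atexit(glfwTerminate);

    // Option 2: RAII guard covers returning from main and exceptions unwound inside main.
    GlfwGuard glfw;
    if (!glfw.ok)
        return 1;

    // ... create window, run the main loop ...

    return 0;                        // ~GlfwGuard runs here and calls glfwTerminate
}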

What happens when a library is initialized?

I'm using GLFW and am getting confused by the logic of using libraries. According to my understanding of functions, when a function is called, the code within is executed and any memory allocated by that function is deallocated at the end of it. Yet calling glfwInit(), for example, allows for the subsequent use of other GLFW functions. My intuition is that this function call would have no effect on the code after it, so how does this work?
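What typically happens is that the init function writes to state with static storage duration inside the library (and/or acquires OS resources), and that state outlives the call; only the function's local variables are deallocated when it returns. A minimal sketch with invented names (not GLFW's actual internals):

// somelib.cpp -- hypothetical library internals
static bool s_initialized = false;   // static storage: persists until the program ends
static int  s_resource    = 0;       // imagine an OS handle acquired during init

int somelibInit()
{
    s_initialized = true;            // survives after somelibInit() returns
    s_resource    = 42;
    return 1;
}

int somelibDoWork()
{
    if (!s_initialized)              // later calls see the state set by init
        return 0;                    // refuses to work before initialization
    return s_resource;
}

So glfwInit does affect the code after it, not by changing your variables, but by setting up the library's own persistent state.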

Waiting for GLContext to be released

I was handed a rendering library that is coded with the OSG library and runs in a Windows environment.
In my program, the renderer exists as a member object in my base class in C++. In my class initialization function, I do all the necessary steps to initialize the renderer and use the functions this renderer class provides accordingly.
However, when I delete my base class, I presume the renderer member object is destroyed along with it. Yet when I create another instance of the class, the program crashes when I try to access the rendering function within the renderer.
I have asked around about this and was told that in Windows, upon deleting the class, the renderer needs to release its GL context, and this may take an indeterminate amount of time depending on the hardware setup.
Is this so? If so, what steps could I take, besides amending the rendering source code (if I could get it), to resolve the issue?
Thanks
Actually, not deleting/releasing the OpenGL context will just create a memory leak but nothing more. Leaving the OpenGL context around should not cause a crash. In fact, crashes like yours are often the result of releasing some object that is still required by some other part of the program, so not releasing stuff should not be the cause of a crash like yours.
Your issue looks more like a screwed-up constructor/destructor or operator= than a GL issue.
It's just a guess without the actual code to see/test.
Most likely you are accessing an already deleted pointer somewhere.
Check all dynamic member variables and pointers inside your class.
I had similar problems in the past, so check these:
trace back pointers in C++
bds 2006 C hidden memory manager conflicts (class new / delete[] vs. AnsiString)
I recommend taking a look at the second link,
especially my own answer; there is a nice example of a screwed-up constructor there.
Another possible cause:
if you are mixing window message code with threads
and accessing visual system calls or objects within threads instead of window code,
that can screw up something in the OS and create unexpected crashes ...
at least on Windows.
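As an illustration of the screwed-up constructor/destructor/operator= case mentioned above, here is a minimal made-up sketch (not taken from the question's code) of a class that owns a raw pointer but violates the rule of three, so copying it leads to a dangling pointer and a double delete:

#include <iostream>

class Renderer {
public:
    int* context;                            // stand-in for an owned resource (e.g. a GL context handle)
    Renderer()  : context(new int(42)) {}
    ~Renderer() { delete context; }          // frees the owned resource
    // Missing: copy constructor and operator= (rule of three violated)
};

int main()
{
    Renderer a;
    {
        Renderer b = a;                      // compiler-generated shallow copy: b.context == a.context
    }                                        // ~Renderer() for b deletes the shared pointer here
    std::cout << *a.context << "\n";         // a.context now dangles -> undefined behavior
    return 0;
}                                            // ~Renderer() for a deletes it again -> double delete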

C++ TinyThread and OpenGL with FreeGLUT

The problem I've encountered is probably caused by a misuse of threading within a simple FreeGLUT C++ application.
Since exiting glutMainLoop() cannot be done elegantly, I rely on glutLeaveMainLoop() to do some of the work, but this doesn't really give control back to the program's main function. I have a global pointer that is set in the main function, prior to the GL calls, to an object instance created on the heap. Without using the atexit callback, nobody will call the delete operator on this instance, although I placed such an operation after the glutMainLoop call (now commented out because of its futility).
Here's a pseudocode for what I'm doing (I won't post the actual code since it's too long to filter):
CWorld* g_world;

void AtExit()
{
    delete g_world;
}

int main()
{
    atexit(AtExit);
    // create an instance of an object that creates a thread in its constructor
    // via the new operator and joins this thread in its destructor,
    // calling the delete operator on that pointer to the thread instance
    CWidget thisObjectCreatesATinyThread;
    g_world = new CWorld();
    ...
    glutMainLoop();
    // delete g_world;
}
Note that in the main function, I also included a widget instance that does some work via a thread created in its constructor. This thread is joined in its destructor, then the memory is deallocated via a delete.
The wrong behaviour: without setting the atexit callback, I get a resource leak, since the destructor of the CWorld object won't get called. If I set this callback, then the delete operator gets called twice for some reason, even though the AtExit function is called only once.
What is the place to look for the source of this odd behaviour?
Even if I disable the CWidget instantiation, I still get the peculiar behaviour.
I assume you're not using the original GLUT library (since it's ancient) but rather FreeGLUT, which is the most widespread GLUT implementation. In order to have glutMainLoop() return, you would do:
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_CONTINUE_EXECUTION);
before calling glutMainLoop(). This will cause it to return if there are no more active top-level windows left when you call glutLeaveMainLoop(). If you don't care about still active windows, instead do:
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
You probably have to include <GL/freeglut.h> instead of <GL/glut.h> in order to get the definitions.
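Put together, a minimal sketch of the pattern described above could look like this (the window setup and the Esc-key handler are just invented scaffolding for illustration):

#include <GL/freeglut.h>

void onDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

void onKey(unsigned char key, int, int)
{
    if (key == 27)                   // Esc: ask freeglut to leave the main loop
        glutLeaveMainLoop();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("demo");
    glutDisplayFunc(onDisplay);
    glutKeyboardFunc(onKey);

    // Make glutMainLoop() return instead of exiting the process.
    glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);

    glutMainLoop();                  // returns after glutLeaveMainLoop() or window close

    // Cleanup can now run normally here, without relying on an atexit callback.
    return 0;
}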

In SDL, does SDL_Quit() free every surface?

Basically, on surfaces that are going to exist right until the program terminates, do I need to run SDL_FreeSurface() for each of them, or would SDL_Quit() take care of all this for me?
I ask mainly because the pointers to a number of my surfaces are class members, and therefore I would need to keep track of each class instance (in a global array or something) if I wanted to run SDL_FreeSurface() on each of their respective surfaces. If SDL_Quit() will do it all in one fell swoop for me, I'd much rather go with that :D
I checked out the SDL 1.2.15 source code to see what actually happens when SDL_Quit is called. Gemini14's answer is correct: SDL_Quit will only free the main SDL_Surface returned by SDL_SetVideoMode.
Here's why:
SDL_Quit calls SDL_QuitSubSystem to quit every subsystem.
SDL_QuitSubSystem will call several subsystem quit functions.
In particular, SDL_VideoQuit is called.
SDL_VideoQuit first checks that the static global pointer current_video is not NULL.
If current_video is not NULL, the function proceeds to clean up several global variables.
SDL_FreeSurface is called on SDL_ShadowSurface or SDL_VideoSurface.
SDL_ShadowSurface or SDL_VideoSurface is initialized and returned by SDL_SetVideoMode.
Since SDL_FreeSurface is only called on the main SDL_Surface initialized by SDL_SetVideoMode, we can reason that the memory allocated for all other SDL_Surface variables is not freed by a call to SDL_Quit, and must therefore be freed with explicit calls to SDL_FreeSurface.
However, since the operating system will generally free a program's memory automatically when it ends, freeing the SDL_Surface variables is only a concern if your program continues running after SDL_Quit.
It's been a while since I used SDL, but I'm pretty sure SDL_Quit just cleans up the screen surface (the main screen buffer that you set up at the beginning). You have to free the other surfaces you create manually or you get leaks. Of course, since they're already class members, one way to do that easily would be to just free them up in the class destructor.
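Following the suggestion above, a minimal sketch of the destructor approach could look like this (SDL 1.2 API; the class name and asset path are invented for illustration, and SDL_image is assumed to be available for IMG_Load):

#include "SDL.h"
#include "SDL_image.h"

class Sprite {
public:
    SDL_Surface* surface;
    explicit Sprite(const char* path) : surface(IMG_Load(path)) {}
    ~Sprite()
    {
        if (surface != NULL)
            SDL_FreeSurface(surface);    // freed here; SDL_Quit will not do it for us
    }
};

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    {
        Sprite player("player.png");     // hypothetical asset
        SDL_BlitSurface(player.surface, NULL, screen, NULL);
        SDL_Flip(screen);
    }                                    // ~Sprite() frees the surface here
    SDL_Quit();                          // frees only the screen surface
    return 0;
}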
It is best practice to free all the surfaces that you know you are using with SDL_FreeSurface().
Similarly, if you fill an array of pointers with malloc so that they take up heap space, exiting the program is not guaranteed to clean up all the used space on every system.
int *memspots[1024];
for (int i = 0; i < 1024; i++) {
    memspots[i] = malloc(sizeof(int)); // 1024 pointers to ints stored in heap memory
}
At the end of your application, you would definitely want to call free in a similar fashion.
for (int i = 0; i < 1024; i++) {
    free(memspots[i]);
}
It is simply best practice to free any memory you have used whenever you can, both at run time and of course at exit.
My GL texture function for SDL temporarily uses an SDL_Surface to gather some image data (taken from SDL_image) and has this at the end:
if (surface != NULL) // Will be NULL if everything failed and SOMEHOW managed to get here
    SDL_FreeSurface(surface);
return;