Why would you set a shader program inside a game loop? - opengl

I'm going through an OpenGL tutorial and in the code examples the functions glGetUniformLocation and glUseProgram are called unconditionally inside of the main game loop.
It seems like it would be a waste to do this once a frame, and the program still behaves correctly after moving this logic to before the start of the game loop. Are there any reasons for keeping this logic inside the game loop?

glGetUniformLocation only needs to be called once, after the shader program has been linked; the location it returns does not change afterwards. Keeping glUseProgram inside the loop is only useful if you later switch between several programs, e.g. if you store which program is currently in use in some variable; with a single program it can be moved out of the loop as well.
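A minimal sketch of the hoisted version (assuming a GL function loader such as glad is already set up; CreateShaderProgram, DrawScene and the "mvp" uniform are placeholder names, not from the tutorial):

#include <GLFW/glfw3.h>

GLuint CreateShaderProgram();   // hypothetical helper: compiles and links the shaders
void DrawScene();               // hypothetical helper: issues the draw calls

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    GLuint program = CreateShaderProgram();               // once, at initialization
    GLint mvpLoc = glGetUniformLocation(program, "mvp");  // location is fixed after linking
    glUseProgram(program);                                // fine out here with a single program

    GLfloat mvp[16] = {};                                 // updated per frame in a real app
    while (!glfwWindowShouldClose(window))
    {
        // Only uniform *values* change per frame; the lookup and the bind are hoisted.
        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);
        DrawScene();
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
}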


Advantage of callback functions? (graphics)

I'm relatively new to writing input-based applications. I'm using OpenGL and GLFW to write games at the moment, and user input is at the heart of it. From what I understand, OpenGL being a C API means that I cannot use classes to encapsulate the existing, predefined key-callback functions.
One way around this is to use a self-defined callback as a global function or as a static method in a class. Link provides quite a few interesting ways to do this. Some of these are better than what I'm doing right now, but my question remains: is it worth the effort?
void glfwKeyCallBack(GLFWwindow* window, int key, int scancode, int action, int mods) // GLFW-defined signature
{ /* does something */ }

void SelfDefinedFun(/* some parameters */) // my definition of it
{ /* does something */ }

int main()
{
    while (gamelooptrue) // inside game loop here
    {
        SelfDefinedFun(/* some parameters */); // 1. call this instead of registering glfwKeyCallBack
        // 2. action 2
        // 3. action 3
        // ...
    }
}
Callback functions are invoked automatically the moment the corresponding action takes place. This means that regardless of how slow my game loop is, each time a key is pressed, glfwKeyCallBack will be called. I can vaguely see some of the advantages such a method provides as opposed to calling SelfDefinedFun repeatedly in the game loop.
However, the game loop requires that the scene be drawn every frame. Given current standards this is usually 60 times a second, though I imagine we would need at least 15 fps for a non-jerky visualization of movement on screen. Given that, I could forget about the callback and simply check for key presses every frame. The advantage is that I can easily encapsulate the polling function, which allows me to perform more complex event-based operations.
So, why are callbacks even necessary if we are drawing the game so many times a second?
One reason I think callbacks could be better is that you don't spend time checking for user actions unless one actually occurs, which leaves more time for drawing. Since my application is simple, this time is negligible here. But would it be different in full-blown 3D games like Skyrim, BioShock, etc.?
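For reference, the two approaches look roughly like this in GLFW (a minimal sketch; the movement handling is a placeholder). One relevant difference: a tap shorter than one frame can be missed by per-frame polling, while the callback receives every event that glfwPollEvents delivers:

#include <GLFW/glfw3.h>

// Callback style: GLFW invokes this for every key event, no matter how slow the loop is.
void KeyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GLFW_TRUE);
}

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "input demo", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, KeyCallback);   // registered once, outside the loop

    while (!glfwWindowShouldClose(window))
    {
        // Polling style: read the current key state once per frame.
        // Good for continuous actions such as movement.
        if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        {
            // move forward (placeholder)
        }
        // ... draw scene ...
        glfwSwapBuffers(window);
        glfwPollEvents();   // queued events are delivered to KeyCallback here
    }
    glfwTerminate();
}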

Vulkan hpp: How to properly handle deletion of unique handles in global scope?

I am trying to create a cache of shader modules in Vulkan to avoid recompiling/recreating them after certain events (e.g. swapchain recreation).
The data structure looks like this:
#include <map>
#include <string>
#include <vulkan/vulkan.hpp>

enum class ShaderStage
{
    VERTEX = 0,
    FRAGMENT
};

// Modules of one program, keyed by stage.
typedef std::map<ShaderStage, vk::UniqueShaderModule> ModuleData;
static std::map<std::string, ModuleData> data_cache;
In other words: a map from a string identifier to the set of shader modules associated with a program, which is itself a map from each shader stage (fragment, vertex, ...) to its module.
The issue with this is that the destructors of those global objects won't be called until after we return from main, which messes up the correct order of object destruction for Vulkan (the modules end up outliving the device they were created from), leading to a segmentation fault.
This can be corrected by adding this function to my class:
static void CleanCache() { data_cache.clear(); }
If this function is called before main exits, there is no segmentation fault.
However, this forces any user of my code to clear the cache manually before exiting main, something I would rather avoid.
I wanted to know if there is a technique I can use to implicitly destroy the objects before terminating the program.
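One common technique (a sketch with my own placeholder names, not from the question) is to give the cache a non-global owner whose destruction point is well defined, for example an RAII object declared after the device in the renderer class. C++ destroys members in reverse declaration order, so the cache, and with it the UniqueShaderModules, is destroyed before the device without any manual CleanCache call:

#include <map>
#include <string>
#include <vulkan/vulkan.hpp>

enum class ShaderStage { VERTEX = 0, FRAGMENT };
typedef std::map<ShaderStage, vk::UniqueShaderModule> ModuleData;

// RAII owner: the cache lives and dies with this object instead of with the process.
class ShaderCache
{
public:
    ModuleData& Get(const std::string& name) { return data_[name]; }
private:
    std::map<std::string, ModuleData> data_;
};

class Renderer
{
    vk::UniqueInstance instance_;  // destroyed last
    vk::UniqueDevice device_;      // destroyed second
    ShaderCache cache_;            // declared last, destroyed first: modules die before the device
};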

Waiting for GLContext to be released

I was handed a rendering library that is built on the OSG library and runs in a Windows environment.
In my program, the renderer exists as a member object of my base class in C++. In my class's initialization function, I do all the necessary steps to initialize the renderer and then use the functions this renderer class provides accordingly.
However, after I delete my base class, I presumed the renderer member object would be destroyed along with it. Yet when I create another instance of the class, the program crashes as soon as I try to access a rendering function within the renderer.
I have asked around for opinions on this and was told that on Windows, upon deleting the class, the renderer needs to release its GL context, and that this can take an indeterminate amount of time depending on the hardware setup.
Is this so? If so, what steps could I take, besides amending the rendering source code (if I could even get it), to resolve the issue?
Thanks
Actually, not deleting/releasing the OpenGL context will just create a memory leak, but nothing more. Leaving the OpenGL context around should not cause a crash. In fact, crashes like yours are often caused by releasing some object that is still required by some other part of the program, so not releasing things should not be the source of a crash like yours.
Your issue looks more like a broken constructor/destructor or operator= than a GL issue.
It's just a guess without the actual code to see/test, but most likely you are accessing an already-deleted pointer somewhere, so check all dynamic member variables and pointers inside your class. I had similar problems in the past, so check these:

trace back pointers in C++
bds 2006 C hidden memory manager conflicts (class new / delete[] vs. AnsiString)

I recommend taking a look at the second link, especially my own answer; there is a nice example of a broken constructor there.

Another possible cause: if you are mixing window-message code with threads and accessing visual system calls or objects from within threads instead of the window code, that can screw up something in the OS and create unexpected crashes, at least on Windows.
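For illustration only (my example, not from the answer above): the classic form of such a broken-constructor bug is a class that owns a raw pointer but relies on the compiler-generated copy operations, so two objects end up deleting the same resource and one of them is left with a dangling pointer:

#include <iostream>

struct Renderer { void Draw() { std::cout << "draw\n"; } };

class Scene
{
public:
    Scene() : renderer_(new Renderer) {}
    ~Scene() { delete renderer_; }
    // Missing copy constructor and operator=: copies share one pointer.
    Renderer* renderer_;
};

int main()
{
    Scene a;
    {
        Scene b = a;        // shallow copy: b.renderer_ == a.renderer_
    }                       // b's destructor deletes the shared Renderer here
    a.renderer_->Draw();    // dangling pointer: undefined behavior, often a crash
}                           // a's destructor deletes the pointer a second time

The usual fix is the Rule of Three: define (or delete) the copy constructor and operator=, or hold the resource in a smart pointer such as std::unique_ptr so ownership is unambiguous.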

glutMainLoopEvent function causes memory leak

I have a main function in a single main.cpp. Basically, it first calls an update function to update the commands for rendering and then calls the rendering function to render the scene. The rendering function lives in another .cpp file.
To prevent glutMainLoop() from blocking the updates in the main function, I use glutMainLoopEvent() from the freeglut package instead.
In my rendering function, the code
glmDraw(Model, GLM_SMOOTH|GLM_TEXTURE|GLM_MATERIAL);
is used to render the scene. If I use glutMainLoop(), this code is executed only once in the rendering function. However, when I use glutMainLoopEvent(), this code is executed again and again, and it causes a memory leak.
Any suggestion for correcting it?
The memory leak will be somewhere in your code. Double-check that all memory you allocate is getting deallocated properly in your render function. It's your code getting called again and again and leaking memory, not GLUT.
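A sketch of the usual shape of the fix, assuming glmDraw comes from Nate Robins' glm OBJ loader and that "model.obj" and the Update function are placeholders: allocate the model once before the loop, only draw inside it, and free once at the end:

#include <GL/freeglut.h>
#include "glm.h"                       // Nate Robins' loader: glmReadOBJ/glmDraw/glmDelete

static GLMmodel* Model = nullptr;

void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glmDraw(Model, GLM_SMOOTH | GLM_TEXTURE | GLM_MATERIAL);  // draws; allocates nothing
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("demo");
    glutDisplayFunc(Render);

    Model = glmReadOBJ((char*)"model.obj");  // load and allocate ONCE, before the loop

    bool running = true;               // would be cleared by input handling (placeholder)
    while (running)
    {
        // Update();                   // placeholder: update commands for rendering
        glutPostRedisplay();           // schedule a redraw
        glutMainLoopEvent();           // process one batch of events without blocking
    }

    glmDelete(Model);                  // free the model exactly once on exit
}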

glGenLists(1) returns 0 outside OnPaint() with wxThread

Currently, I am trying to move the display-list creation out of OnPaint(), but glGenLists(1) returns 0.
Is there any prerequisite for using display lists?
Does glGenLists(1) only work inside the OnXxx() event thread?
Thank you!
The only requirement is having a valid OpenGL context made current. You probably don't have one. If you use multiple threads, you need to use multiple GL contexts which share objects.
From what I understand, OpenGL can be used across multiple threads (with some caveats), but you should avoid doing so when possible. glGenLists is probably failing because, as mentioned, you are calling it in a different thread than the one you used to create your OpenGL context. If you can, I would suggest moving something other than OpenGL calls to the second thread.
OpenGL and threads do not mix. If you really need threads, call OpenGL functions from only one thread.
As already said, glGenLists returns 0 on error. Check the error with the glGetError function.
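A minimal sketch of what those checks look like in a wxWidgets setting (MyFrame and the canvas_/context_/displayList_ members are placeholder names; it assumes the list is built on the thread that owns the context):

#include <wx/glcanvas.h>
#include <wx/log.h>

// Member function of a class that owns a wxGLCanvas* canvas_ and a wxGLContext* context_.
void MyFrame::BuildDisplayList()
{
    canvas_->SetCurrent(*context_);      // make the GL context current on THIS thread first

    GLuint list = glGenLists(1);
    if (list == 0)                       // 0 signals failure, e.g. no current context
    {
        GLenum err = glGetError();
        wxLogError("glGenLists failed, GL error 0x%x", err);
        return;
    }

    glNewList(list, GL_COMPILE);
    // ... drawing commands recorded into the list ...
    glEndList();
    displayList_ = list;                 // placeholder member storing the list id
}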