What's the proper way to do this?
I'm doing these steps:
1. Create Shader(s)
2. Compile Shader(s)
3. Create Program
4. Attach Shader(s) to Program
5. Link Program
6. Delete Shader(s)
In http://www.opengl.org/wiki/GLSL_Object it says: You do not have to explicitly detach shader objects, even after linking the program. However, it is a good idea to do so once linking is complete, as otherwise the program object will keep its attached shader objects alive when you try to delete them.
And the answers to Proper way to delete GLSL shader? also say that memory usage will increase if I don't delete the shaders.
So checking on http://www.opengl.org/sdk/docs/man/xhtml/glDetachShader.xml, it says If shader has already been flagged for deletion by a call to glDeleteShader and it is not attached to any other program object, it will be deleted after it has been detached.
So my step #6 is useless unless I also detach the shaders afterwards, right?
Should I detach and delete after the Program has been compiled correctly (to save the memory) or should I detach/delete only when my application is closing down?
So my step #6 is useless unless I also detach the shaders afterwards, right?
Yes. What the GL does is basically reference counting. As long as some other object is referencing the shader object, it will stay alive. If you delete the object, the actual deletion will be deferred until the last reference is removed.
Should I detach and delete after the Program has been compiled correctly (to save the memory) or should I detach/delete only when my application is closing down?
That is up to you. You can delete it as soon as you don't need it any more. If you do not plan to relink that program, you can destroy all attached shader objects immediately after the initial link operation. However, shader objects don't consume much memory (and don't go to GPU memory; only the final programs do), so it is typically not a big deal if you delete them later, or even don't delete them at all, since all GL resources are destroyed when the GL context is destroyed (including the case where the application exits). Of course, if you create shaders dynamically at runtime, you should also dynamically delete the old and unused objects to avoid accumulating lots of unused objects and effectively leaking memory/object names and so on.
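For illustration, here is a minimal sketch of the flow discussed above, assuming vertexSrc and fragmentSrc are const char* strings holding the GLSL source; compile/link status checks are omitted for brevity:

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);
    glCompileShader(vs);                      // check GL_COMPILE_STATUS in real code

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                      // check GL_LINK_STATUS in real code

    // Detach first so the program drops its references, then delete;
    // the shader objects can now actually be freed.
    glDetachShader(prog, vs);
    glDetachShader(prog, fs);
    glDeleteShader(vs);
    glDeleteShader(fs);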
I was given a rendering library that is coded with the OSG library and runs in a Windows environment.
In my program, the renderer exists as a member object of my base class in C++. In my class's initialization function, I do all the necessary steps to initialize the renderer and then use the functions this renderer class provides accordingly.
However, when I deleted my base class, I presumed the renderer member object would be destroyed along with it. Yet when I created another instance of the class, the program crashed when I tried to access a rendering function within the renderer.
I have asked around for opinions on this matter and was told that on Windows, upon deleting the class, the renderer needs to release its GL context, and that this may take an indeterminate amount of time depending on the hardware setup.
Is this so? If so, what steps could I take, besides amending the rendering source code (if I could get it), to resolve the issue?
Thanks
Actually, not deleting/releasing the OpenGL context will just create a memory leak, but nothing more. Leaving the OpenGL context around should not cause a crash. In fact, crashes like yours are often caused by releasing some object that's still required by some other part of the program, so not releasing stuff should not be the cause of a crash like yours.
Your issue looks more like a broken constructor/destructor or operator= than a GL issue.
It's just a guess without the actual code to see/test, but most likely you are accessing an already deleted pointer somewhere.
Check all dynamic member variables and pointers inside your class.
I had similar problems in the past, so check these:
trace back pointers in C++
bds 2006 C hidden memory manager conflicts (class new / delete[] vs. AnsiString)
I recommend taking a look at the second link, especially my own answer; there is a nice example of a broken constructor there.
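For illustration only (this is a made-up class, not the asker's code), here is the kind of broken constructor/destructor combination meant above; the compiler-generated copy copies the raw pointer, so the same allocation gets deleted twice:

    class Renderer
    {
        char* buffer;
    public:
        Renderer()  : buffer(new char[256]) {}
        ~Renderer() { delete[] buffer; }
        // missing: copy constructor and operator= (Rule of Three),
        // so copies end up sharing the same raw pointer
    };

    int main()
    {
        Renderer a;
        {
            Renderer b = a;   // shallow copy of 'buffer'
        }                     // b's destructor frees the allocation here
        // 'a' now holds a dangling pointer; its destructor double-deletes it,
        // which typically crashes somewhere later, much like in the question
        return 0;
    }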
Another possible cause: if you are mixing window message code with threads and accessing visual system calls or objects from inside threads instead of from the window code, that can screw up something in the OS and create unexpected crashes, at least on Windows.
I created a Vertex Buffer Object class to manage lots of vertices in my application. The user calls the constructor to create a GL buffer, and glBufferData is called to allocate a specified amount of space.
There is a class function called resize that allows the user to change the capacity of the VBO by calling glBufferData again. My question is: how do I deallocate the previous allocation? Or is it done automatically?
glDeleteBuffers, according to the OpenGL docs, only deletes the buffer object itself, with no mention of the actual memory allocated with glBufferData.
Can I keep calling glBufferData on the same bound buffer with no memory leak?
You can't create a memory leak by repeatedly calling glBufferData() for the same buffer object. The new allocation replaces the old one.
There's one subtle aspect that most of the time you don't need to worry about, but may still be useful to understand: There is a possibility of having multiple active allocations for the same buffer object temporarily. This happens due to the asynchronous nature of OpenGL. For illustration, picture a call sequence like this:
1. glBufferData(dataA)
2. glDraw()
3. glBufferData(dataB)
4. glDraw()
When you make the API call for item 3 in this sequence, the GPU may not yet have finished with the draw call from call 2. It may in fact still be queued up somewhere in the driver, and not handed over to the GPU yet. Since call 2 depends on dataA, that data can not be deleted until the GPU finished executing the draw call 2. In this case, the allocations for dataA and dataB temporarily exist at the same time.
When exactly dataA is actually deleted depends on the implementation. It just can't be earlier than the time where the GPU finishes with draw call 2. After that it could be immediately, based on some garbage collection timer, when memory runs low, or many other options.
glDeleteBuffers() will also delete the buffer memory. Very similarly to the point above, this may not happen immediately. Again, the memory can only be freed after the GPU has finished executing all pending operations that use it.
If you don't plan to use the buffer object anymore, calling glDeleteBuffers() is the best option.
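As a rough sketch of the wrapper class under discussion (the class and member names here are invented, and error handling is left out), the resize member simply reallocates the data store and the destructor deletes the buffer:

    class VertexBuffer
    {
        GLuint id;
    public:
        explicit VertexBuffer(GLsizeiptr bytes)
        {
            glGenBuffers(1, &id);
            glBindBuffer(GL_ARRAY_BUFFER, id);
            glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_DYNAMIC_DRAW);
        }

        void resize(GLsizeiptr bytes)
        {
            glBindBuffer(GL_ARRAY_BUFFER, id);
            // Replaces the previous data store; the driver frees the old one
            // once no pending GL command still reads from it.
            glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_DYNAMIC_DRAW);
        }

        ~VertexBuffer()
        {
            // Deletes the buffer name; its storage is released by the driver
            // after any pending operations that use it have finished.
            glDeleteBuffers(1, &id);
        }
    };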
After 10 minutes I read the docs for glBufferData, which say:
glBufferData creates a new data store for the buffer object currently bound to target. Any pre-existing data store is deleted.
That answers my question: I can indeed keep calling it to increase or decrease the size of my VBO.
glDeleteBuffers deletes the buffer handle, and the associated resource, if any, should be collected/released by the system soon after.
If the buffer is currently bound, the driver will unbind it (bind zero), although it is ugly to delete a bound buffer.
After I have linked my program, may I delete the shaders attached to it?
http://www.opengl.org/sdk/docs/man/xhtml/glDeleteShader.xml
If a shader object to be deleted is attached to a program object, it will be flagged for deletion, but it will not be deleted until it is no longer attached to any program object, for any rendering context (i.e., it must be detached from wherever it was attached before it will be deleted).
Yes, according to the documentation: https://www.opengl.org/sdk/docs/man/html/glLinkProgram.xhtml
The program object's information log is updated and the program is generated at the time of the link operation. After the link operation, applications are free to modify attached shader objects, compile attached shader objects, detach shader objects, delete shader objects, and attach additional shader objects. None of these operations affects the information log or the program that is part of the program object.
Summarizing: "After the link operation, applications are free to ... delete shader objects. None of these operations affects... the program that is part of the program object."
This gives you more options than theBuzzSaw's response might suggest. In particular, you are free to delete the shader, which, as theBuzzSaw says, will not actually delete it until it is detached. But after linking you can also detach the shader, which allows it to be fully deleted, and the linked program will not be affected.
This early-deletion of the shader is used in this tutorial:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
(although it isn't explained directly there).
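For reference, a minimal sketch of that early-deletion pattern (the variable names are placeholders, not taken verbatim from the tutorial):

    glLinkProgram(program);                   // check GL_LINK_STATUS in real code

    // Once linked, the program no longer needs the shader objects:
    glDetachShader(program, vertexShader);
    glDetachShader(program, fragmentShader);
    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);

    // 'program' remains fully usable with glUseProgram(program).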
I'm using PBOs to asynchronously move data between my CPU and GPU.
When moving from the GPU, I know I can delete the source texture after I have called glMapBuffer on the PBO.
However, what about the other way around? When do I know that the transfer from the PBO to the texture (glTexSubImage2D(..., NULL)) is done and I can safely release or re-use the PBO? Is it as soon as I bind the texture, or something else?
I think that after calling glTexImage you are safe to delete or reuse the buffer without errors, as the driver handles everything for you, including deferred destruction (that's the advantage of buffer objects). But this means that calls to glMapBuffer may block until the preceding glTexImage copy has completed. If you want to reuse the buffer and just overwrite its whole content, it is common practice to reallocate it with glBufferData before calling glMapBuffer. This way the driver knows you don't care about the previous content anymore and can allocate a new buffer that you can use immediately (the memory containing the previous content is then freed by the driver when it is really not used anymore). Just keep in mind that your buffer object is just a handle to memory that the driver can manage and copy as it likes.
EDIT: This means that in the other direction (GPU to CPU) you can delete the source texture after glGetTexImage has returned, as the driver manages everything behind the scenes. The decision to use buffer objects or not should not have any implications on the order and time at which you call GL functions. Keep in mind that calling glDelete... does not immediately delete an object; it just enqueues this command into the GL command stream, and even then it's up to the driver when it really frees any memory.
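To illustrate the reuse pattern described above (the pbo, tex, pixels, size, width and height variables are assumed to exist; GL_STREAM_DRAW is just one plausible usage hint):

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    // Orphan the old storage so glMapBuffer does not stall waiting for the
    // previous glTexSubImage2D copy from this PBO to finish.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

    void* ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(ptr, pixels, size);                // fill with the new image data
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // Kick off the asynchronous transfer from the PBO into the texture;
    // the last argument is an offset into the bound PBO, not a pointer.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);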
I'm working on a pretty standard Qt mobile app (written in C++, targeted at Symbian devices), and am finding that sometimes when the app is closed (i.e. via a call to QApplication::quit), the final destructor in the app can take a long time to return (30 seconds plus). By this I mean, all clean up operations in the destructor have completed (quickly, all well within a second) and we've reached the point where execution is leaving the destructor and returning to the code that implicitly called it (i.e. when we delete the object).
Obviously at that point I'd expect execution to return to just after the call to delete the object, virtually instantly, but as I say sometimes this is taking an age!
This long closure time happens both in debug and release builds, with logging enabled or disabled, so I don't think that's a factor here. When we reach the end of the destructor I'm pretty certain no file handles are left open, nor any other resources (network connections etc.), though even if they were, surely this wouldn't present itself as a problem on exiting the destructor(?).
This is on deleting the application's QMainWindow object. Currently the call to do this is in a slot connected to QApplication::aboutToQuit, though I've tried deleting that object in the app's "main" function too.
The length of delay we experience seems proportional to the amount of activity in the app before we exit. This sort of makes me think memory leaks may be a problem here; however, we're not aware of any (which doesn't mean there aren't any, of course), and I've never seen this behaviour before with leaked memory.
Has anyone any ideas what might be going on here?
Cheers
If your final destructor is for a class that inherits QObject, then the QObject destructor will be called immediately after the destructor of your final object. Presumably this object is the root of a possibly large object tree, which will trigger a number of actions, including calling the destructor of every child QObject. Since you state that the problem is compounded by the amount of activity, there are likely a very large number of children being added to the object tree that are all deleted at this time, perhaps more than you intended.
Instead of adding all the objects to one giant tree to be deleted all at once, identify objects that are created often and don't need to persist through the entire execution. Instead of creating those objects with a parent, start a new tree that can be deleted earlier (parent = 0). Look at QObject::deleteLater(), which will wait until there is no user interaction to delete the objects in these independent trees.
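A minimal sketch of that suggestion (the Worker class and function names are invented for illustration): create frequently used, short-lived objects without a parent so they don't pile up under the QMainWindow, and dispose of them with deleteLater():

    #include <QObject>

    // Hypothetical short-lived helper; in the real app this could be any
    // object that is created frequently during normal use.
    class Worker : public QObject
    {
    public:
        explicit Worker(QObject *parent = 0) : QObject(parent) {}
    };

    void handleSomeEvent()
    {
        // parent = 0: the object is NOT added to the main window's tree,
        // so it does not have to be torn down with the QMainWindow at exit.
        Worker *w = new Worker(0);

        // ... use w ...

        // Schedule deletion for the next event-loop iteration instead of
        // letting the object live until the application shuts down.
        w->deleteLater();
    }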