Waiting for GLContext to be released - opengl

I was handed a rendering library built on the OSG (OpenSceneGraph) library that runs in a Windows environment.
In my program, the renderer exists as a member object of my base class in C++. In my class initialization function, I do all the necessary steps to initialize the renderer and then use the functions this renderer class provides.
When I deleted my base class, I presumed the renderer member object would be destroyed along with it. However, when I created another instance of the class, the program crashed as soon as I tried to access a rendering function within the renderer.
I asked around about this and was told that on Windows, when the class is deleted, the renderer needs to release its GL context, and that this can take an indeterminate amount of time depending on the hardware setup.
Is this so? If so, what steps could I take, besides amending the rendering source code (if I could even get it), to resolve the issue?
Thanks

Actually, not deleting / releasing the OpenGL context will just create a memory leak, but nothing more. Leaving the OpenGL context around should not cause a crash. In fact, crashes like yours are usually caused by releasing some object that is still required by another part of the program, so not releasing things should not be the cause of a crash like yours.

Your issue looks more like a screwed-up constructor/destructor or operator= than a GL issue.
It's just a guess without the actual code to see/test.
Most likely you are accessing an already deleted pointer somewhere.
Check all dynamic member variables and pointers inside your class.
I had similar problems in the past, so check these:
trace back pointers in C++
bds 2006 C hidden memory manager conflicts (class new / delete[] vs. AnsiString)
I recommend taking a look at the second link,
especially my own answer; there is a nice example of a screwed-up constructor there (a minimal sketch of the same kind of bug follows below).
Another possible cause:
if you are mixing window message code with threads
and accessing visual system calls or objects from threads instead of from the window code,
that can screw up something in the OS and create unexpected crashes ...
at least on Windows.
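As an illustration of the constructor/destructor bug described above, here is a minimal, hypothetical sketch (not taken from the question's code): a class that owns a raw pointer but relies on the compiler-generated copy operations, so two instances end up sharing and double-deleting the same renderer.

```cpp
#include <iostream>

struct Renderer {
    void draw() { std::cout << "draw\n"; }
};

class Scene {
public:
    Scene() : renderer(new Renderer) {}
    ~Scene() { delete renderer; }
    // Missing: copy constructor and copy assignment (rule of three),
    // so copies silently share the same Renderer*.
    void render() { renderer->draw(); }
private:
    Renderer* renderer;
};

int main() {
    Scene a;
    {
        Scene b = a;  // compiler-generated shallow copy: b.renderer == a.renderer
    }                  // b's destructor deletes the shared Renderer here
    a.render();        // a now uses a dangling pointer: undefined behavior, likely a crash
}                      // a's destructor deletes the same pointer again (double delete)
```

Defining (or deleting) the copy operations, or holding the renderer in a std::unique_ptr, removes the double delete in this sketch.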

Related

Why is this QObject subclass created in a different thread than its parent?

I'm running into some Qt threading issues that I don't understand, running Qt5.6 on Windows 7. Everything is installed and built using MSYS2.
I've created a Server object, which manages a Device object on behalf of remote clients. Both are subclasses of QObject. After the Server receives a client request to create a device, the Device is constructed with the Server as its parent.
But when I run the application in GDB, calling this->parent() for the Device returns a NULL pointer, indicating that this object has no parent. Furthermore, this->thread() returns a different thread from the Server object's thread. And the requisite "Cannot create children for a parent that is in a different thread" warning is printed, somewhere in the QObject constructor itself.
This is despite the Device object's constructor being passed the Server as its parent. There are no explicitly created QThread objects, and I'm only using the default event loop created by QCoreApplication::exec(). Furthermore, compiling and running this on either Linux or OS X confirms that the parent is set correctly and that both objects live in the same thread.
I understand why a call like setParent() or its moral equivalent will fail, to keep objects and their corresponding event loops in the same thread. But it appears that the thread affinity for the Device object is somehow already set by the time the constructor is called. But why is there more than one thread in the first place? And why only on Windows? And why would the object be created in a seemingly random thread, when it could very well use the thread of the parent that I'm passing in the constructor?
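For reference, here is a minimal sketch of the setup described above (the class names and the request handler are illustrative stand-ins, not the actual code); on the affected Windows debug build, the qDebug() line would report differing threads and a null parent:

```cpp
#include <QCoreApplication>
#include <QObject>
#include <QThread>
#include <QDebug>

// Illustrative stand-ins for the question's Device and Server classes.
class Device : public QObject {
public:
    explicit Device(QObject *parent = nullptr) : QObject(parent) {}
};

class Server : public QObject {
public:
    explicit Server(QObject *parent = nullptr) : QObject(parent) {}

    // Hypothetical entry point, called when a client asks for a new device.
    void onCreateDeviceRequest() {
        Device *device = new Device(this);  // Server passed as parent
        qDebug() << "server thread:" << thread()
                 << "device thread:" << device->thread()
                 << "device parent:" << device->parent();
        // Expected: same thread, parent == this.
        // Observed on the failing build: different threads, parent == nullptr,
        // plus "Cannot create children for a parent that is in a different thread".
    }
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    Server server;
    server.onCreateDeviceRequest();
    return 0;  // no event loop needed for this diagnostic
}
```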
EDIT:
A couple of other things to note. I am not only not calling moveToThread() or anything like that, but there's no concurrency of any kind. This includes things like QRunnable or anything from the QtConcurrent module.
Another thing that actually solves the practical problem, but that I don't understand, is that this goes away if I compile for release. I've been using the debug build during development, which is apparently the only time the strange thread/parent behavior manifests. When I run the release build, everything works swimmingly. Is it possible this is a bug in the MSYS2 build of GDB that I'm using? Maybe there's some way in which Qt is not playing nicely with GDB (or vice versa)?
EDIT 2:
Based on a comment from @Ben, I dug around in more depth in the QObject constructor code (found here), starting at line 821. On line 826, the function QThreadData::current() is ultimately called, and this is where the threads of the parent and child first seem to differ. The Qt-internal class QThreadData's static function current() (found here) seems to query thread-local storage to return the current thread's index, I think, which appears different from that of the parent's thread. I don't know enough about the Windows threading libraries to know whether this is really what's going on, but after reading the pthread-based version of the code, that's my best guess.
I'm pretty far down the rabbit hole at this point, but it seems like my amended question is: why does QThreadData::current() return a different thread from the parent's? Or why has the thread-local storage containing this data changed between the time the parent was constructed and the time the child's constructor is called?

Access violation when using boost::signals2 and unloading DLLs

I'm trying to figure out this problem.
Suppose you have code that uses boost::signals2 for communicating between objects. Let's call them "colorscales". The code for these colorscales usually lives in the same DLL as the code that uses them; let's call it main.dll.
But sometimes code from other DLLs needs to use these objects, and this is where the problems begin.
Basically, the application is pretty big, and most of the DLLs are loaded to do some work and then unloaded. This is not the case for the DLL that contains the colorscale code; it's never unloaded during normal runtime of the application.
So, when one of the DLLs is loaded (let's call it tools.dll) and some of its code runs, it may want to use these colorscale objects and communicate with them, so I connect to the signals these objects provide.
The problem is that boost is pretty lazy and all clever, and when you disconnect() slots, it doesn't actually erase the connection and the state associated with it (like the boost::bind object and such). It just sets a flag that this connection is now disconnected and cleans it up later (actually, it cleans up two of these objects when you connect new slots and one of them when you invoke the signal, as of version 1.57). You probably already see where this is going.
So, when you don't need the tools anymore, you disconnect these signals and then the application unloads tools.dll.
Then, at a later stage, some code executes from main.dll that causes one of the colorscale signals to be invoked. boost::signals2 goes to invoke it, but first it tries to clean up one disconnected slot. This is where the access violation happens, because internally the connection held a shared_state object, or something like it, that tries to clean itself up in a thread-safe way. But the code it tries to call is no longer there, because the DLL has been unloaded, so an Access Violation exception is thrown.
I've tried to fix this by invoking the signal with some dummy parameters before the DLL is unloaded, and also by connecting and then disconnecting more slots (this one was a stupid idea, because it doesn't solve the problem, it just multiplies it) some predefined number of times (2 or 3 times more than there are slots in total).
It worked, or so I thought, because now it doesn't crash instantly, but rather crashes the next time you load the same tools.dll. I still need to figure out where and why it crashes, but it's somewhere else inside boost.
So, I wanted to ask: what are my options for fixing it?
My thoughts were:
Implementing my own connection that works in a more straightforward way
Providing a simpler way to communicate, like callbacks, for instance (a minimal sketch of this option follows below)
Finding a workaround for boost being so lazy and smart.
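As an illustration of the second option, here is a minimal, hypothetical callback registry (the names are made up, not part of the question's code) that erases its entries immediately on disconnect, so nothing referring to code in an unloaded DLL is kept alive for deferred cleanup:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>

// A deliberately simple signal replacement: connect() returns an id,
// disconnect() erases the slot right away instead of deferring cleanup.
template <typename... Args>
class SimpleSignal {
public:
    using Slot = std::function<void(Args...)>;

    std::uint64_t connect(Slot slot) {
        const std::uint64_t id = nextId_++;
        slots_[id] = std::move(slot);
        return id;
    }

    void disconnect(std::uint64_t id) {
        slots_.erase(id);  // the slot (and anything it captured) is destroyed here
    }

    void operator()(Args... args) const {
        for (const auto& entry : slots_)
            entry.second(args...);
    }

private:
    std::uint64_t nextId_ = 1;
    std::map<std::uint64_t, Slot> slots_;
};
```

The trade-off is that this sketch is not thread-safe and does not support disconnecting a slot from inside its own invocation, which is part of what boost::signals2's deferred cleanup handles; but if tools.dll disconnects everything it connected before being unloaded, no stale code pointers remain.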
Well, it seems that I've found the cause of the crash that occurs after the fix.
So, basically, what happens when you use the workaround described above (calling the signal with dummy parameters multiple times) is that the _shared_state object that was created by the boost code in main.dll gets replaced by another _shared_state object created by the boost code in tools.dll. This object keeps a pointer to a reference counter (of a type derived from boost::detail::sp_counted_base) inside.
Then tools.dll unloads, but the object remains, and its virtual table now points to code that is no longer there. Let's look at the virtual table of the reference counter to understand what's going on.
[0] 0x000007fed8a42fe5 tools.dll!boost::detail::sp_counted_impl_p<...>::`vector deleting destructor'(unsigned int)
[1] 0x000007fed8a4181b tools.dll!boost::detail::sp_counted_impl_p<...>::dispose(void)
[2] 0x000007fed8a4458e tools.dll!boost::detail::sp_counted_base::destroy(void)
[3] 0x000007fed8a43c42 SegyTools.dll!boost::detail::sp_counted_impl_p<...>::get_deleter(class type_info const &)
[4] 0x000007fed8a42da6 tools.dll!boost::detail::sp_counted_impl_p<...>::get_untyped_deleter(void)
As you can see, all these methods are related to disposal of the reference counter, so the problem doesn't arise until you try to do the same trick a second time. So the trick of disconnecting all signals to try to get rid of all the code from tools.dll doesn't work as expected, and the next time you try it, an Access Violation occurs.

Why calling GC.Collect is speeding things up

We have a C++ library (no MFC, no ATL) that provides some core functionality to our .NET application. The C++ DLL is run through SWIG to generate a C# assembly whose classes/methods can be accessed using P/Invoke. This C# assembly is used in our .NET application to access the functionality inside the C++ DLL.
The problem is related to memory leaks. In our .NET application, I have a loop in my .NET code that creates thousands of instances of a particular class from the C++ DLL. The loop keeps slowing down as it creates instances, but if I call GC.Collect() inside the loop (which I know is not recommended), the processing becomes faster. Why is this? Calling Dispose() on the instances does not have any effect on the speed. I was expecting a decrease in program speed when using GC.Collect(), but it's just the opposite.
Each class that SWIG generates has a finalizer (~destructor) which calls Dispose(). Every Dispose method has a lock(this) around the statements that release the unmanaged memory. Finally, it calls GC.SuppressFinalize. We also see AccessViolationException sporadically in Release builds. Any help will be appreciated.
Some types of object can clean up after themselves (via Finalize) if they are abandoned without being disposed, but can be costly to keep around until the finalizer gets around to them; one example within the Framework is the enumerator returned by Microsoft.VisualBasic.Collection.GetEnumerator(). Each call to GetEnumerator() attaches an object wrapped by the new enumerator object to various private update events managed by the collection; when the enumerator is Disposed, it unsubscribes from those events. If GetEnumerator() is called many times (e.g. many thousands or millions of times) without the enumerators being disposed and without an intervening garbage collection, the collection will get slower and slower (hundreds or thousands of times slower than normal) as the event subscription list keeps growing. Once a garbage collection occurs, however, any abandoned enumerators' Finalize methods will clean up their subscriptions, and things will start working decently again.
I know you've said that you're calling Dispose, but I have a suspicion that something is creating an IDisposable object instance and not calling Dispose on it. If IDisposable class Foo creates and owns an instance of IDisposable class Bar, but Foo doesn't Dispose that instance within its own Dispose implementation, calling Dispose on an instance of Foo won't clean up the Bar. Once the instance of Foo is abandoned, whether or not it has been Disposed, its Bar will end up being abandoned without disposal.
Are you calling GC.SuppressFinalize in your Dispose method? In any case, there is a nice MSDN article that explains how to write GC-friendly code: garbage collection basics and performance hints. Maybe it will be useful.

C++ Ref Count Changes While Destructing Object

I've got a private ref count inside the class SharedObject. SharedObject is a base class for other classes, for example Window. Window is the base class of Editor.
When the ref count reaches 0, because of calling SharedObject::Release(), the SharedObject deletes itself. First we get to the Editor destructor, which shows that the this pointer contains m_refs == 0, but when we get to the Window destructor it is suddenly 1, and when we reach the SharedObject destructor, it is still 1.
I put a breakpoint on the SharedObject::IncRef() method, and it was never called while this happened.
What the?
Build with optimizations off, and set a memory breakpoint on your m_refs.
Seems like you have a memory leak somewhere, maybe even long before this destruction occurs. I use Alleyoop to find leaks. It can help, and it won't hurt to have that out of the way.
Do you use multiple threads? Maybe it's due to some raw pointer somewhere being grabbed by another thread during destruction.
On a side note, I recommend using boost::intrusive_ptr - a very convenient pattern for handling addrefs and releases in shared objects that helps you stay consistent with it, but this probably won't solve your problem unless you have a real mess in your code ;)
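For illustration, here is a minimal sketch of the boost::intrusive_ptr pattern mentioned above, using a hypothetical SharedObject-like base class (this is not the question's actual class hierarchy):

```cpp
#include <boost/intrusive_ptr.hpp>
#include <atomic>

class SharedObject {
public:
    virtual ~SharedObject() = default;

    // Found via ADL by boost::intrusive_ptr; all ref-count traffic goes
    // through these two functions, so stray manual IncRef()/Release()
    // calls can't silently change the count.
    friend void intrusive_ptr_add_ref(SharedObject* p) {
        p->m_refs.fetch_add(1, std::memory_order_relaxed);
    }
    friend void intrusive_ptr_release(SharedObject* p) {
        if (p->m_refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }

private:
    std::atomic<int> m_refs{0};
};

class Window : public SharedObject {};
class Editor : public Window {};

int main() {
    boost::intrusive_ptr<Editor> editor(new Editor);   // count: 1
    boost::intrusive_ptr<Window> window = editor;      // count: 2
}   // both pointers release; the Editor is deleted exactly once
```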

What could cause SIGSEGV when calling NewObjectArray for JNI in Android?

I just started working with the Android NDK but I keep getting SIGSEGV when I have this call in my C code:
jobjectArray someStringArray;
someStringArray = (*env)->NewObjectArray(env, 10,
                                         (*env)->FindClass(env, "java/lang/String"),
                                         (*env)->NewStringUTF(env, ""));
Based on all the examples I can find, the above code is correct, but I keep getting SIGSEGV, and everything is fine if the NewObjectArray line is commented out. Any idea what could cause such a problem?
That looks right, so I'm guessing you've done something else wrong. I assume you're running with CheckJNI on? You might want to break that up into multiple lines: do the FindClass and check the return value, do the NewStringUTF and check the return value, and then call NewObjectArray.
By the way, you might want to pass NULL as the final argument; this pattern of using the empty string as the default value for each element of the array is commonly used (I think it was copied and pasted from some Sun documentation and has spread from there), but it's rarely useful and slightly wasteful. (And it doesn't match the behavior of "new String[10]" in Java.)
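A minimal sketch of the suggested approach, splitting the nested call apart, checking each return value, and passing NULL as the initial element as suggested above (the helper name and error handling are illustrative only):

```c
#include <jni.h>

/* Hypothetical helper, not from the question: build a 10-element String array,
 * checking each intermediate result instead of nesting the calls. */
static jobjectArray makeStringArray(JNIEnv *env) {
    jclass stringClass = (*env)->FindClass(env, "java/lang/String");
    if (stringClass == NULL) {
        return NULL;  /* a ClassNotFoundException is already pending */
    }

    jobjectArray arr = (*env)->NewObjectArray(env, 10, stringClass, NULL);
    if (arr == NULL) {
        return NULL;  /* an OutOfMemoryError is already pending */
    }

    (*env)->DeleteLocalRef(env, stringClass);  /* no longer needed */
    return arr;
}
```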
I guess one of the possible causes is that, in a long-running JNI method, the VM aborts when it runs out of per-method-invocation local reference slots (normally 512 slots on Android).
Since the FindClass() and NewStringUTF() functions allocate local references, if you stay in a JNI method for a long time the VM does not know whether a specific local reference should be recycled or not. So you should explicitly call DeleteLocalRef() to release the acquired local references when they are no longer required. If you don't, these "zombie" local references will occupy slots in the VM, and the VM aborts when it runs out of local reference slots.
In a short-running JNI method this may not be a problem, because all the local references are recycled when the JNI method returns.
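To illustrate the point about releasing local references, here is a hedged sketch (the helper name, string contents, and array size are made up) of a long-running loop that frees each local reference as soon as it is stored in the array:

```c
#include <jni.h>

/* Hypothetical long-running helper: without the DeleteLocalRef call, each
 * iteration would leave a live local reference behind, and a long enough loop
 * could exhaust the per-invocation local reference table. */
static void fillStringArray(JNIEnv *env, jobjectArray arr, jsize count) {
    jsize i;
    for (i = 0; i < count; i++) {
        jstring s = (*env)->NewStringUTF(env, "placeholder");
        if (s == NULL) {
            return;  /* an OutOfMemoryError is already pending */
        }
        (*env)->SetObjectArrayElement(env, arr, i, s);
        (*env)->DeleteLocalRef(env, s);  /* release the slot immediately */
    }
}
```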