Here is my simple code.
#include <iostream>
#include <GLFW/glfw3.h>
int main() {
    int count;
    GLFWmonitor** monitors = glfwGetMonitors(&count);
    std::cout << count << std::endl;
    return 0;
}
For some reason it keeps telling me that there are zero monitors. I assumed 0 might really mean 1, but I have two other monitors attached to my computer, and when I go into System Preferences I can clearly see them both. I don't know why it keeps reporting zero, and I have no clue what the issue could be.
I'm guessing you need to call glfwInit() before you do anything else.
From the GLFW documentation:
int glfwInit (void)
This function initializes the GLFW library. Before most GLFW functions
can be used, GLFW must be initialized, and before a program terminates
GLFW should be terminated in order to free any resources allocated
during or after initialization.
If this function fails, it calls glfwTerminate before returning. If it
succeeds, you should call glfwTerminate before the program exits.
Additional calls to this function after successful initialization but
before termination will succeed but will do nothing.
Returns
GL_TRUE if successful, or GL_FALSE if an error occurred.
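So a minimal corrected sketch of your program (untested) would be:

#include <iostream>
#include <GLFW/glfw3.h>

int main() {
    // Initialize GLFW before calling any other GLFW function.
    if (!glfwInit()) {
        std::cerr << "glfwInit() failed" << std::endl;
        return 1;
    }

    int count = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&count);
    std::cout << count << std::endl;

    glfwTerminate();
    return 0;
}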
I have a question which is quite general, but I hope someone will be able to at least point me in the right direction.
I created my project and was building it only in Debug mode with the /MDd flag.
But it started to have performance issues, so I wanted to try it in Release mode to see how it goes.
The problem is that when I use the /MD or /MT flag in Release mode, my application instantly crashes.
So I tried to find out why. It works fine in Debug. I've tried some code changes, but nothing helped. So I decided to make my app just start and comment out the rest of my code. But it was still crashing, even when the code was unused. It only stopped crashing when I completely removed those unused parts of the code.
I think it's something to do with variable initialization/declaration, but I'm not quite sure what I should look for.
Could someone suggest what can cause an application to crash even when the code in question is just declaration/initialization and is never even used at runtime?
I hope you can understand what my problem is.
Thanks for any suggestions!
EDIT: Here is the code which crashes when the unused code is in the project, but does not crash when I remove it.
#include "core/oxygine.h"
#include "Stage.h"
#include "DebugActor.h"
//#include "Galatex.h"
using namespace oxygine;
//called each frame
int mainloop()
{
    //galatex_update();
    //update our stage
    //update all actors. Actor::update would be called also for all children
    getStage()->update();
    if (core::beginRendering())
    {
        Color clearColor(32, 32, 32, 255);
        Rect viewport(Point(0, 0), core::getDisplaySize());
        //render all actors. Actor::render would be called also for all children
        getStage()->render(clearColor, viewport);
        core::swapDisplayBuffers();
    }
    //update internal components
    //all input events would be passed to Stage::instance.handleEvent
    //if done is true then User requests quit from app.
    bool done = core::update();
    return done ? 1 : 0;
}
//it is application entry point
void run()
{
    ObjectBase::__startTracingLeaks();
    //initialize Oxygine's internal stuff
    core::init_desc desc;
#if OXYGINE_SDL || OXYGINE_EMSCRIPTEN
    //we could set up the initial window size on SDL builds
    desc.w = 1800;
    desc.h = 1000;
    //marmalade settings could be changed from the emulator's menu
#endif
    //galatex_preinit();
    core::init(&desc);
    //create Stage. Stage is a root node
    Stage::instance = new Stage(true);
    Point size = core::getDisplaySize();
    getStage()->setSize(size);
    //DebugActor is a helper actor node. It shows FPS, memory usage and other useful stuff
    DebugActor::show();
    //initialize this example's stuff. see example.cpp
    //galatex_init();
#ifdef EMSCRIPTEN
    /*
    if you build for Emscripten, mainloop would be called automatically from outside.
    see emscripten_set_main_loop below
    */
    return;
#endif
    //here is the main game loop
    while (1)
    {
        int done = mainloop();
        if (done)
            break;
    }
    //user wants to leave the application...
    //let's dump all created objects into the log
    //all created and not freed resources would be displayed
    ObjectBase::dumpCreatedObjects();
    //let's clean up everything right now and call ObjectBase::dumpObjects() again
    //we need to free all allocated resources and delete all created actors
    //all actors/sprites are smart-pointer objects, so you don't actually need to remove them by hand,
    //but now we want to delete them by hand
    //check example.cpp
    //galatex_destroy();
    //renderer.cleanup();
    /**releases all internal components and Stage*/
    core::release();
    //the dump list should be empty now
    //we deleted everything and can be sure that there aren't any memory leaks
    ObjectBase::dumpCreatedObjects();
    ObjectBase::__stopTracingLeaks();
    //end
}
#ifdef __S3E__
int main(int argc, char* argv[])
{
run();
return 0;
}
#endif
#ifdef OXYGINE_SDL
#include "SDL_main.h"
extern "C"
{
int main(int argc, char* argv[])
{
run();
return 0;
}
};
#endif
#ifdef EMSCRIPTEN
#include <emscripten.h>
void one() { mainloop(); }
int main(int argc, char* argv[])
{
run();
emscripten_set_main_loop(one, 0, 0);
return 0;
}
#endif
So I'll write it here for other newbies like me who might find themselves in a similar situation.
My problem was the initialization of static and other variables that were "outside of a function". For example:
MyObject* object = new MyObject(); // This was the reason it was crashing; I just had to move
                                   // the initialization of such variables into a function
                                   // that was called when the object was created.

void MyClass::myFunction() {
    object->doSomething();
}
So when the program started, the initialization of those variables caused it to crash.
Note: It seems like it was a problem with objects only, because plain variables like integers were just fine.
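To make the fix concrete, here is a minimal sketch of the pattern I ended up with (MyObject, initObjects and useObject are made-up names standing in for my real code):

struct MyObject { void doSomething() {} };   // stand-in for the real class

// Before (crashed at startup in Release builds):
//MyObject* object = new MyObject();         // dynamic initialization at global scope

// After: only declare the global here...
static MyObject* object = NULL;

// ...and construct it from a function that runs once the program is already up
void initObjects() {
    object = new MyObject();
}

void useObject() {
    object->doSomething();
}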
Well, I'm not totally sure why this is allowed in Debug mode but crashes Release mode right after start; maybe someone could answer under this comment and explain this behavior. I'm just a beginner and I'm doing a lot of bad stuff, but I'm trying, and that's good, right? :D
I hope I didn't waste too much of your time, and maybe this post will be useful to someone in the future.
I am trying to make a (multithreaded) server and I ran into a problem: it keeps filling up memory. So I decided to do a simple test. Here is the code in main:
#include <iostream>
#include <cstdio>
#include <cstdint>
#include <windows.h>
#include <process.h>
using namespace std;

void handle(void *);   // the thread routine, shown below

int main(void)
{
    int x;
    while (1)
    {
        cin >> x;
        uintptr_t thread = 0;
        //handle(NULL);
        thread = _beginthread(handle, 0, NULL);
        if (thread == -1) {
            fprintf(stderr, "Couldn't create thread: %d\n", GetLastError());
        }
    }
}
And here is the 'handle' function:
void handle(void *)
{
;
}
I open Task Manager and look there to see how much RAM my process takes.
With main as you see it right now, after each press of the key 1 followed by Enter (so the body of the while loop executes), the RAM the process takes increases by 4 KB (basically, each time a thread is created, it seems to leak 4 KB of memory). If I do this multiple times, it keeps increasing, 4 KB each time.
If in main I comment out 'thread = _beginthread(handle, 0, NULL);' and uncomment '//handle(NULL);', then the process does not increase its RAM usage.
Does anyone have any idea how to free that 4 KB of memory?
I am compiling it with Code::Blocks, but I get the same result compiling it with Visual Studio.
EDIT: from MSDN: "When the thread returns from that routine, it is terminated automatically."
Also I put '_endthread();' in my handle function, but the result IS THE SAME!
Each time around the loop this program creates a new thread. The program never closes any threads.
I think what you have demonstrated is that the memory cost of creating a thread is around 4K.
Presuming you don't want an ever-increasing number of threads, either you should close one before creating another or at least give up when you've got enough.
On further reflection, the above is wrong. I tried your program, and it will not and cannot do what you say, unless there is some important part of the story you've left out.
The line with "cin" just blocks. I pressed enter a few times, but nothing interesting happened. So I took it out.
This program does not leak. Each thread terminates when the handle function finishes.
Here is the code I wrote, adapting yours.
#include <iostream>
#include <Windows.h>
#include <process.h>
using namespace std;
int nthread = 0;
void handle(void *) {
    nthread++;
}

int main(int argc, char* argv[]) {
    while (nthread < 50000) {
        cout << nthread << ' ';
        uintptr_t thread = 0;
        thread = _beginthread(handle, 0, NULL);
        if (thread == -1) {
            fprintf(stderr, "Couldn't create thread: %d\n", GetLastError());
            break;
        }
    }
}
It runs 50,000 iterations and uses a grand total of less than 1MB of memory. Exactly as expected.
Something doesn't add up.
Every thread needs some memory for its own infrastructure; that's what the 4 KB is. When the thread terminates (this depends on your implementation), this 4 KB will be freed. You should use the API functions for joining the child threads, and therefore you should keep the handle(s). Calling the handle function directly is just a function call; no memory is allocated in that case.
EDIT:
Your "handle" function terminates immediately. As far as I know (at least for posix/linux) there are options at creation time for auto-free the memory, or otherwise joining is required. The one thread you see is the "main" thread of the process itself. This way your programm is producing memory leaks.
I am writing a program which executes Tcl scripts. When the script has an exit command, the program crashes with this error:
DeleteInterpProc called with active evals
Aborted
I am calling Tcl_EvalFile(m_interpreter, script.c_str()) where script is the file name.
I have also tried Tcl_Eval with the interpreter and "source filename" as arguments. The result is the same. Other Tcl commands (e.g. puts) the interpreter executes normally. How can this be fixed?
#include <tcl.h>
#include <iostream>
int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    //Tcl_Preserve(interp);
    Tcl_Eval(interp, "exit");
    //Tcl_Release(interp);
    std::cout << "11111111111" << std::endl;
    return 0;
}
This is the simple case: "11111111111" is not printed. As I understand it, the whole program exits when Tcl_Eval(interp, "exit") is called. The result is the same after adding Tcl_Preserve and Tcl_Release.
The problem is that the interpreter, the execution context for Tcl code, is getting its feet deleted out from under itself; this makes it very confused! At least you're getting a clean panic/abort rather than a disgusting hard-to-reproduce crash.
The easiest fix is probably to do:
Tcl_Preserve(m_interpreter);
// Your code that calls Tcl_EvalFile(m_interpreter, script.c_str())
// and deals with the results.
Tcl_Release(m_interpreter);
Be aware that after the Tcl_Release, the Tcl_Interp handle may refer to deleted memory.
(Yes, wrapping the Tcl_Preserve/Tcl_Release in RAII goodness is reasonable.)
If you want instead to permit your code to run after the script does an exit, you have to take additional steps. In particular, the standard Tcl exit command is not designed to cause a return to the calling context: it will cause the process to call the _exit(2) system call. To change its behavior, replace it:
// A callback function that implements the replacement
static int
MyReplacementExit(ClientData unused, Tcl_Interp *interp, int argc, const char *argv[])
{
    // We ought to check the argument count... but why bother?
    Tcl_DeleteInterp(interp);
    return TCL_OK;
}

int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    // Install that function over the standard [exit]
    Tcl_CreateCommand(interp, "exit", MyReplacementExit, NULL, NULL);
    // Important; need to keep the *handle* live until we're finished
    Tcl_Preserve(interp);
    // Or run whatever code you want here...
    Tcl_Eval(interp, "exit");
    // Important piece of cleanup code
    if (!Tcl_InterpDeleted(interp))
        Tcl_DeleteInterp(interp);
    Tcl_Release(interp);
    // After this point, you *MUST NOT* use interp
    std::cout << "11111111111" << std::endl;
    return 0;
}
The rules for doing memory management in these sorts of scenarios are laid out in the manual page for Tcl_CreateInterp. (That's the 8.6 manual page, but the relevant rules have been true since at least Tcl 7.0, which is over 2 decades ago.) Once an interpreter is deleted, you can no longer count on executing any commands or accessing any variables in it; the Tcl library handles the state unwinding for you.
It might be better to replace (hide) the exit command and create your own exit command that exits your program gracefully. I'm not that good with C and the Tcl C API, but I hope this can help you.
Eggdrop for example uses the die command to exit gracefully.
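For what it's worth, here is a rough (untested) sketch of that hide-and-replace idea with the Tcl C API, where the replacement just sets a quit flag owned by the host program (MyExitCmd and installGracefulExit are made-up names):

#include <tcl.h>

// Replacement [exit]: instead of terminating the process, record a quit request.
static int MyExitCmd(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    *(int *)clientData = 1;   // hypothetical "please quit" flag owned by the host program
    return TCL_OK;
}

void installGracefulExit(Tcl_Interp *interp, int *quitFlag)
{
    // Move the real [exit] out of the way, then install our own in its place.
    Tcl_HideCommand(interp, "exit", "realExit");
    Tcl_CreateObjCommand(interp, "exit", MyExitCmd, (ClientData)quitFlag, NULL);
}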
I'm attempting to run a part of my program in a thread and getting an unusual result.
I have updated this question with the results of the changes suggested by Remus, but as I am still getting an error, I feel the question is still open.
I have implemented functionality in a dll to tie into a piece of vendor software. Everything works until I attempt to create a thread inside this dll.
Here is the relevant section of the DLL:
extern "C" {
__declspec(dllexport) void __cdecl ccEntryOnEvent(WORD event);
}
to define the function the vendor's software calls, then:
using namespace std;
HANDLE LEETT_Thread = NULL;
static bool run_LEETT = true;
unsigned threadID;
void *lpParam;
int RunLEETTThread ( void ) {
    LEETT_Thread = (HANDLE)_beginthreadex( NULL, 0, LEETT_Main, lpParam, 0, &threadID );
    //LEETT_Thread = CreateThread ( NULL, 0, LEETT_Main, lpParam, 0, NULL );
    if ( LEETT_Thread == NULL )
        ErrorExit ( _T("Unable to start translator thread") );
    run_LEETT = false; // We only wish to create the thread a single time.
    return 0;
}

extern "C" void __cdecl ccEntryOnEvent(WORD event ) {
    switch (event) {
    case E_START:
        if ( run_LEETT ) {
            RunLEETTThread ();
            MessageText ( "Running LEETT Thread" );
        }
        break;
    }
    WaitForSingleObject( LEETT_Thread, INFINITE );
    return;
}
The function is declared as
unsigned __stdcall LEETT_Main ( void* lpParam ) {
LEETT_Main is about 136k when compiled as a stand-alone executable with no optimization (I have a separate file with a main() in it that calls the same function as myFunc).
Prior to changing the way the thread is called, the program would crash when declaring a structure containing a std::list, shown here:
struct stateFlags {
    bool inComment;        // multiline comments bypass parsing, but not line numbering
    // Line preconditions
    bool MCodeSeen;        // only 1 m code per block allowed
    bool GCodeSeen;        // only 1 g code per block allowed
    std::list<int> gotos;  // a list of the destination line numbers
};
It now crashes on the _beginthreadex command, tracing through shows this
/*
* Allocate and initialize a per-thread data structure for the to-
* be-created thread.
*/
if ( (ptd = (_ptiddata)_calloc_crt(1, sizeof(struct _tiddata))) == NULL )
goto error_return;
Tracing through this I saw an error 252 (bad ptr) and ultimately 255 (runtime error).
I'm wondering if anyone has encountered this sort of behaviour creating threads (in DLLs?) and what the remedy might be. When I created an instance of this structure in my toy program, there was no issue. When I removed the list variable, the program simply crashed elsewhere, on the declaration of a string.
I'm very open to suggestions at this point, if I have to I'll remove the idea of threading for now, though it's not particularly practical.
Thanks, especially to those reading this over again :)
Threads that use CRT (and std::list implies CRT) need to be created with _beginthreadex, as documented on MSDN:
A thread in an executable that calls the C run-time library (CRT)
should use the _beginthreadex and _endthreadex functions for thread
management rather than CreateThread and ExitThread;
It's not clear how you start your thread, but it appears that you're doing it in DllMain, which is not recommended (see Does creating a thread from DllMain deadlock or doesn't it?).
After rechecking the comments here and the configuration of the project: the vendor-supplied solution file uses /MTd for debug, but we are building a DLL, so I needed to use /MDd, after which it immediately compiles and runs correctly.
Sorry about the ridiculous head scratcher...
I'm writing a memory tracking system, and the only problem I've actually run into is that when the application exits, any static/global objects that didn't allocate in their constructor but do deallocate in their destructor end up deallocating after my memory tracker has already reported the allocated data as a leak.
As far as I can tell, the only way for me to properly solve this would be to either force the placement of the memory tracker's _atexit callback at the head of the stack (so that it is called last) or have it execute after the entire _atexit stack has been unwound. Is it actually possible to implement either of these solutions, or is there another solution that I have overlooked?
Edit:
I'm working on/developing for Windows XP and compiling with VS2005.
I've finally figured out how to do this under Windows/Visual Studio. Looking through the CRT startup function again (specifically where it calls the initializers for globals), I noticed that it was simply running "function pointers" contained between certain segments. So with just a little knowledge of how the linker works, I came up with this:
#include <iostream>
using std::cout;
using std::endl;
// Typedef for the function pointer
typedef void (*_PVFV)(void);
// Our various functions/classes that are going to log the application startup/exit
struct TestClass
{
int m_instanceID;
TestClass(int instanceID) : m_instanceID(instanceID) { cout << " Creating TestClass: " << m_instanceID << endl; }
~TestClass() {cout << " Destroying TestClass: " << m_instanceID << endl; }
};
static int InitInt(const char *ptr) { cout << " Initializing Variable: " << ptr << endl; return 42; }
static void LastOnExitFunc() { puts("Called " __FUNCTION__ "();"); }
static void CInit() { puts("Called " __FUNCTION__ "();"); atexit(&LastOnExitFunc); }
static void CppInit() { puts("Called " __FUNCTION__ "();"); }
// our variables to be intialized
extern "C" { static int testCVar1 = InitInt("testCVar1"); }
static TestClass testClassInstance1(1);
static int testCppVar1 = InitInt("testCppVar1");
// Define where our segment names
#define SEGMENT_C_INIT ".CRT$XIM"
#define SEGMENT_CPP_INIT ".CRT$XCM"
// Build our various function tables and insert them into the correct segments.
#pragma data_seg(SEGMENT_C_INIT)
#pragma data_seg(SEGMENT_CPP_INIT)
#pragma data_seg() // Switch back to the default segment
// Call create our call function pointer arrays and place them in the segments created above
#define SEG_ALLOCATE(SEGMENT) __declspec(allocate(SEGMENT))
SEG_ALLOCATE(SEGMENT_C_INIT) _PVFV c_init_funcs[] = { &CInit };
SEG_ALLOCATE(SEGMENT_CPP_INIT) _PVFV cpp_init_funcs[] = { &CppInit };
// Some more variables just to show that declaration order isn't affecting anything
extern "C" { static int testCVar2 = InitInt("testCVar2"); }
static TestClass testClassInstance2(2);
static int testCppVar2 = InitInt("testCppVar2");
// Main function which prints itself just so we can see where the app actually enters
void main()
{
cout << " Entered Main()!" << endl;
}
which outputs:
Called CInit();
Called CppInit();
Initializing Variable: testCVar1
Creating TestClass: 1
Initializing Variable: testCppVar1
Initializing Variable: testCVar2
Creating TestClass: 2
Initializing Variable: testCppVar2
Entered Main()!
Destroying TestClass: 2
Destroying TestClass: 1
Called LastOnExitFunc();
This works due to the way MS have written their runtime library. Basically, they've set up the following variables in the data segments:
(although this info is copyrighted, I believe this is fair use as it doesn't devalue the original and is only here for reference)
extern _CRTALLOC(".CRT$XIA") _PIFV __xi_a[];
extern _CRTALLOC(".CRT$XIZ") _PIFV __xi_z[]; /* C initializers */
extern _CRTALLOC(".CRT$XCA") _PVFV __xc_a[];
extern _CRTALLOC(".CRT$XCZ") _PVFV __xc_z[]; /* C++ initializers */
extern _CRTALLOC(".CRT$XPA") _PVFV __xp_a[];
extern _CRTALLOC(".CRT$XPZ") _PVFV __xp_z[]; /* C pre-terminators */
extern _CRTALLOC(".CRT$XTA") _PVFV __xt_a[];
extern _CRTALLOC(".CRT$XTZ") _PVFV __xt_z[]; /* C terminators */
On initialization, the program simply iterates from '__xN_a' to '__xN_z' (where N is {i,c,p,t}) and calls any non null pointers it finds. If we just insert our own segment in between the segments '.CRT$XnA' and '.CRT$XnZ' (where, once again n is {I,C,P,T}), it will be called along with everything else that normally gets called.
The linker simply joins up the segments in alphabetical order. This makes it extremely simple to select when our functions should be called. If you have a look in defsects.inc (found under $(VS_DIR)\VC\crt\src\) you can see that MS have placed all the "user" initialization functions (that is, the ones that initialize globals in your code) in segments ending with 'U'. This means that we just need to place our initializers in a segment earlier than 'U' and they will be called before any other initializers.
You must be really careful not to use any functionality that isn't initialized until after your selected placement of the function pointers (frankly, I'd recommend you just use .CRT$XCT; that way it's only your code that hasn't been initialized. I'm not sure what will happen if you've linked with standard 'C' code; you may have to place it in the .CRT$XIT block in that case).
One thing I did discover was that the "pre-terminators" and "terminators" aren't actually stored in the executable if you link against the DLL versions of the runtime library. Because of this, you can't really use them as a general solution. Instead, the way I made it run my specific function as the last "user" function was simply to call atexit() within the 'C initializers'; this way, no other function could have been added to the stack (which is called in the reverse order to which functions are added, and is how global/static destructors are all called).
Just one final (obvious) note: this is written with Microsoft's runtime library in mind. It may work similarly on other platforms/compilers (hopefully you'll be able to get away with just changing the segment names to whatever they use, IF they use the same scheme), but don't count on it.
atexit is processed by the C/C++ runtime (CRT). It runs after main() has already returned. Probably the best way to do this is to replace the standard CRT with your own.
On Windows tlibc is probably a great place to start: http://www.codeproject.com/KB/library/tlibc.aspx
Look at the code sample for mainCRTStartup and just run your code after the call to _doexit();
but before ExitProcess.
Alternatively, you could just get notified when ExitProcess gets called. When ExitProcess gets called the following occurs (according to http://msdn.microsoft.com/en-us/library/ms682658%28VS.85%29.aspx):
All of the threads in the process, except the calling thread, terminate their execution without receiving a DLL_THREAD_DETACH notification.
The states of all of the threads terminated in step 1 become signaled.
The entry-point functions of all loaded dynamic-link libraries (DLLs) are called with DLL_PROCESS_DETACH.
After all attached DLLs have executed any process termination code, the ExitProcess function terminates the current process, including the calling thread.
The state of the calling thread becomes signaled.
All of the object handles opened by the process are closed.
The termination status of the process changes from STILL_ACTIVE to the exit value of the process.
The state of the process object becomes signaled, satisfying any threads that had been waiting for the process to terminate.
So, one method would be to create a DLL and have that DLL attach to the process. It will get notified when the process exits, which should be after atexit has been processed.
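A minimal sketch of that DLL-notification approach (DumpLeaks here is a made-up stand-in for whatever reporting routine the tracker exposes):

#include <windows.h>

void DumpLeaks();   // hypothetical: the tracker's reporting routine

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_DETACH) {
        // Reached during ExitProcess, typically after the EXE's atexit
        // handlers have already run.
        DumpLeaks();
    }
    return TRUE;
}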
Obviously, this is all rather hackish, proceed carefully.
This is dependent on the development platform. For example, Borland C++ has a #pragma which could be used for exactly this. (From Borland C++ 5.0, c. 1995)
#pragma startup function-name [priority]
#pragma exit function-name [priority]
These two pragmas allow the program to specify function(s) that should be called either upon program startup (before the main function is called), or program exit (just before the program terminates through _exit).
The specified function-name must be a previously declared function as:
void function-name(void);
The optional priority should be in the range 64 to 255, with highest priority at 0; default is 100. Functions with higher priorities are called first at startup and last at exit. Priorities from 0 to 63 are used by the C libraries, and should not be used by the user.
Perhaps your C compiler has a similar facility?
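GCC and Clang, for example, have something comparable in the constructor/destructor function attributes, which also take an optional priority (a sketch; check your compiler's docs for the exact ordering rules):

#include <stdio.h>

// Runs before main(); the optional priority orders it relative to other
// constructor functions (priorities 0-100 are reserved for the implementation).
__attribute__((constructor(101))) static void tracker_startup(void)  { puts("tracker up"); }

// Runs after main() returns or exit() is called, ordered by the same priority scheme.
__attribute__((destructor(101)))  static void tracker_shutdown(void) { puts("tracker down"); }

int main(void) { puts("main"); return 0; }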
I've read multiple times you can't guarantee the construction order of global variables (cite). I'd think it is pretty safe to infer from this that destructor execution order is also not guaranteed.
Therefore, if your memory tracking object is global, you will almost certainly be unable to get any guarantee that your memory tracker object will be destructed last (or constructed first). If it's not destructed last, and other allocations are outstanding, then yes, it will report the leaks you mention.
Also, what platform is this _atexit function defined for?
Having the memory tracker's cleanup executed last is the best solution. The easiest way I've found to do that is to explicitly control all the relevant global variables' initialization order. (Some libraries hide their global state in fancy classes or otherwise, thinking they're following a pattern, but all they do is prevent this kind of flexibility.)
Example main.cpp:
#include "global_init.inc"
int main() {
// do very little work; all initialization, main-specific stuff
// then call your application's mainloop
}
Where the global-initialization file includes object definitions and #includes similar non-header files. Order the objects in this file in the order you want them constructed, and they'll be destructed in the reverse order. 18.3/8 in C++03 guarantees that destruction order mirrors construction: "Non-local objects with static storage duration are destroyed in the reverse order of the completion of their constructor." (That section is talking about exit(), but a return from main is the same, see 3.6.1/5.)
As a bonus, you're guaranteed that all globals (in that file) are initialized before entering main. (Something not guaranteed in the standard, but allowed if implementations choose.)
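For illustration, a hypothetical global_init.inc might look like this (the class names are invented; the point is only the ordering):

// global_init.inc -- textually included exactly once, from main.cpp.
// Objects are constructed top to bottom and destructed bottom to top.
MemTracker     g_memTracker;           // constructed first, therefore destructed last
Logger         g_logger("app.log");    // safe: the tracker already exists when this allocates
AppSubsystems  g_subsystems;           // everything else the application needs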
I've had this exact problem, also writing a memory tracker.
A few things:
Along with destruction, you also need to handle construction. Be prepared for malloc/new to be called BEFORE your memory tracker is constructed (assuming it is written as a class). So you need your class to know whether it has been constructed or destructed yet!
class MemTracker
{
    enum State
    {
        unconstructed = 0, // must be 0 !!!
        constructed,
        destructed
    };
    State state;

    MemTracker()
    {
        if (state == unconstructed)
        {
            // construct...
            state = constructed;
        }
    }
};
static MemTracker memTracker; // all statics are zero-initted by linker
On every allocation that calls into your tracker, construct it!
MemTracker::malloc(...)
{
// force call to constructor, which does nothing after first time
new (this) MemTracker();
...
}
Strange, but true. Anyhow, onto destruction:
~MemTracker()
{
OutputLeaks(file);
state = destructed;
}
So, on destruction, output your results. Yet we know that there will be more calls. What to do? Well,...
MemTracker::free(void * ptr)
{
    do_tracking(ptr);
    if (state == destructed)
    {
        // we must be getting called late,
        // so re-output.
        // Note that this might happen a lot...
        OutputLeaks(file); // again!
    }
}
And lastly:
be careful with threading
be careful not to call malloc/free/new/delete inside your tracker, or be able to detect the recursion, etc :-)
EDIT:
And I forgot: if you put your tracker in a DLL, you will probably need to LoadLibrary() (or dlopen, etc.) yourself to bump your own reference count, so that you don't get removed from memory prematurely. Although your class can still be called after destruction, it can't be if the code has been unloaded.
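On Windows, that self-pinning can look roughly like this (a sketch; the PIN flag keeps the module loaded until the process terminates):

#include <windows.h>

// Call once from inside the tracker DLL (e.g. from the tracker's constructor)
// so the module containing this code can never be unloaded early.
static void PinThisModule()
{
    HMODULE self = NULL;
    GetModuleHandleExW(
        GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS | GET_MODULE_HANDLE_EX_FLAG_PIN,
        reinterpret_cast<LPCWSTR>(&PinThisModule),
        &self);
}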