Good or bad: Calling destructor in constructor [closed] - c++

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Edit: I don't think it is the same question, actually; the other question is a general one about calling destructors manually. Here it happens during construction, inside the class itself. I still want to know what happens when you do this, as stated in the question below.
At first sight I think it is bad, really bad. I am analysing this piece of constructor code (see below), made by two other people, and I need to translate it to Delphi Object Pascal. It must behave the same as the C version. I don't like the style, it is very ugly, but never mind.
Another thing: at two stages the code calls the destructor when something fails (I suppose to close the connection, but the destructor is called automatically when the object is deleted, so why would you do this anyway?). I think that is not the way to do it, or am I missing something here?
Also, after calling the destructor they throw an exception (huh?), but I think this will never be executed and will cause another exception when you later access the object manually or want to delete it.
Serial::Serial(
    std::string &commPortName,
    int bitRate,
    bool testOnStartup,
    bool cycleDtrOnStartup
) {
    std::wstring com_name_ws = s2ws(commPortName);
    commHandle =
        CreateFileW(
            com_name_ws.c_str(),
            GENERIC_READ | GENERIC_WRITE,
            0,
            NULL,
            OPEN_EXISTING,
            0,
            NULL
        );
    if(commHandle == INVALID_HANDLE_VALUE)
        throw("ERROR: Could not open com port");
    else {
        // set timeouts
        COMMTIMEOUTS timeouts;
        /* Blocking:
             timeouts.ReadIntervalTimeout = MAXDWORD;
             timeouts.ReadTotalTimeoutConstant = 0;
             timeouts.ReadTotalTimeoutMultiplier = 0;
           Non-blocking:
             timeouts = { MAXDWORD, 0, 0, 0, 0 }; */
        // Non-blocking with short timeouts
        timeouts.ReadIntervalTimeout = 1;
        timeouts.ReadTotalTimeoutMultiplier = 1;
        timeouts.ReadTotalTimeoutConstant = 1;
        timeouts.WriteTotalTimeoutMultiplier = 1;
        timeouts.WriteTotalTimeoutConstant = 1;
        DCB dcb;
        if(!SetCommTimeouts(commHandle, &timeouts)) {
            Serial::~Serial(); // <-- Calls destructor!
            throw("ERROR: Could not set com port time-outs");
        }
        // set DCB; disabling hardware flow control; setting 1N8 mode
        memset(&dcb, 0, sizeof(dcb));
        dcb.DCBlength = sizeof(dcb);
        dcb.BaudRate = bitRate;
        dcb.fBinary = 1;
        dcb.fDtrControl = DTR_CONTROL_DISABLE;
        dcb.fRtsControl = RTS_CONTROL_DISABLE;
        dcb.Parity = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        dcb.ByteSize = 8;
        if(!SetCommState(commHandle, &dcb)) {
            Serial::~Serial(); // <-- Calls destructor!
            throw("ERROR: Could not set com port parameters");
        }
    }
    if(cycleDtrOnStartup) {
        if(!EscapeCommFunction(commHandle, CLRDTR))
            throw("ERROR: clearing DTR");
        Sleep(200);
        if(!EscapeCommFunction(commHandle, SETDTR))
            throw("ERROR: setting DTR");
    }
    if(testOnStartup) {
        DWORD numWritten;
        char init[] = "PJON-python init";
        if(!WriteFile(commHandle, init, sizeof(init), &numWritten, NULL))
            throw("writing initial data to port failed");
        if(numWritten != sizeof(init))
            throw("ERROR: not all test data written to port");
    }
};

Serial::~Serial() {
    CloseHandle(commHandle);
};
// and there is more etc .......
// .............
Next question: what will actually happen in memory when this code executes and calls the destructor? I am not able to run and debug it.

This code is ugly but legal. When an exception is thrown from the constructor, the corresponding destructor is never called, so calling it manually before throwing is needed to prevent a resource leak. The real bug here is not calling the destructor manually in the other cases before throwing an exception.
Of course, a better way of doing this is to have a separate RAII object that encapsulates commHandle. A unique_ptr with a custom deleter can serve this role.
Any destructor beyond low-level libraries is a code smell in modern C++.
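For reference, a minimal sketch of that unique_ptr-with-custom-deleter idea (not the original code; the HandleCloser and UniqueHandle names are invented here):

#include <windows.h>
#include <memory>
#include <stdexcept>
#include <string>

// Deleter that closes a Win32 HANDLE; 'pointer' makes unique_ptr store a raw HANDLE.
struct HandleCloser {
    using pointer = HANDLE;
    void operator()(HANDLE h) const noexcept {
        if (h && h != INVALID_HANDLE_VALUE) CloseHandle(h);
    }
};
using UniqueHandle = std::unique_ptr<void, HandleCloser>;

class Serial {
public:
    explicit Serial(const std::wstring &comName)
        : commHandle(CreateFileW(comName.c_str(), GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL))
    {
        if (commHandle.get() == INVALID_HANDLE_VALUE)
            throw std::runtime_error("ERROR: Could not open com port");

        COMMTIMEOUTS timeouts = { 1, 1, 1, 1, 1 }; // same short timeouts as the original
        if (!SetCommTimeouts(commHandle.get(), &timeouts))
            // No manual ~Serial() call needed: leaving the constructor via this throw
            // destroys the already-constructed member, which closes the handle.
            throw std::runtime_error("ERROR: Could not set com port time-outs");
        // ... SetCommState, DTR cycling and the test write would follow as in the original ...
    }
    // No user-written destructor required at all.
private:
    UniqueHandle commHandle;
};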

Well, let's start by saying the obvious: don't write code this way. I can see why they did it - calling the destructor manually was a convenient way to clean up before throwing that exception, but why is it such a bad idea?
Well, the destructor is normally only called if the constructor runs to completion (so, in the normal way of things, it won't run if the constructor throws), and this is deliberate: it allows the destructor to assume that the object has been fully initialised. A destructor of any complexity that tries to tear down an object which is not fully initialised is likely to run into trouble.
Now none of this matters in the code as written, because all we have here is a tinpot destructor that just closes the handle, so here the code does correctly clean up before throwing (sometimes, thank you Eugene) and we can all sit down and relax. But as a programming pattern it stinks, and now that you know what it actually does, you should tidy it up when you move it to Delphi.
So, lecture over, a few specifics (in no particular order):
When you call a destructor manually, it's just like calling any other function: it is executed, it returns, and life goes on. Specifically, the object itself is not deallocated. Doing this has value when using placement new (see the sketch after these points).
It follows from the above that the call to throw will be executed after the destructor returns (it would be anyway, regardless).
Just to repeat, when a constructor throws, the destructor is not called. The object will be deallocated subsequently, however, before the exception is caught (if it ever is), I believe.
If the rest of the code you have to convert is written in such a slapdash manner, I don't envy you. In the general run of things constructors shouldn't fail anyway; just open the port in a separate method once the object is up and running.
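To make that placement-new point concrete, here is a small self-contained sketch (invented names, nothing to do with the Serial code): the explicit destructor call destroys the object but deliberately leaves the caller-managed storage alone.

#include <new>
#include <cstdio>

struct Widget {
    Widget()  { std::puts("constructed"); }
    ~Widget() { std::puts("destroyed"); }
};

int main() {
    alignas(Widget) unsigned char storage[sizeof(Widget)];
    Widget *w = new (storage) Widget;  // placement new: construct into our own buffer
    w->~Widget();                      // explicit destructor call: destroy, deallocate nothing
    Widget *w2 = new (storage) Widget; // the same storage can be reused for a fresh object
    w2->~Widget();
    return 0;
}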

When you throw from the constructor, the destructors of everything constructed so far will be called: the member variables and the base classes (section 15.2/2).
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects
If you also call the destructor manually, their destructors will be called again (section 12.4/8).
After executing the body of the destructor and destroying any automatic objects allocated within the body, a destructor for class X calls the destructors for X’s direct non-variant non-static data members, the destructors for X’s direct base classes ...
Therefore the destructors of the member variables will be called twice. Formally, calling a destructor twice is undefined behaviour. (You may get away with it if they all have empty destructors.)
If you really need a clean solution, wrap the parts that need to be cleaned up into a class and make it a member variable. Do the initialisation from there, and if you throw, you are guaranteed that it will be cleaned up.
You even get kudos points for applying RAII.
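A minimal sketch of that wrapper-member idea (the CommHandle name is invented here; this is an illustration, not the original code). Once the handle lives in its own member object, throwing from Serial's constructor automatically destroys the already-constructed member, which closes the port.

#include <windows.h>
#include <stdexcept>

class CommHandle {
public:
    explicit CommHandle(HANDLE h) : h_(h) {
        if (h_ == INVALID_HANDLE_VALUE)
            throw std::runtime_error("ERROR: Could not open com port");
    }
    ~CommHandle() { CloseHandle(h_); }
    CommHandle(const CommHandle&) = delete;            // non-copyable: it owns the handle
    CommHandle& operator=(const CommHandle&) = delete;
    HANDLE get() const { return h_; }
private:
    HANDLE h_;
};

// Serial would then hold a 'CommHandle commHandle;' member, initialise it in the
// constructor's initializer list, and never call its own destructor manually.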

Related

How to properly use multiple destructors in UE4

In my game I am creating a big class which stores references to smaller classes, which in turn store some references of their own. During gameplay I need to recreate this big class, with all its dependencies, by destroying it and making a new one.
Its creation looks like this:
ABigClass::ABigClass()
{
    UE_LOG(LogTemp, Warning, TEXT("BigClass created"));
    SmallClass1 = NewObject<ASmallClass>();
    SmallClass2 = NewObject<ASmallClass>();
    ......
}
And it works. Now I want to destroy it and re-create it by calling, from some function:
BigClass->~ABigClass();
BigClass= NewObject<ABigClass>();
which destroys BigClass and creates a new one with new small classes. The problem is that the old small classes are still in memory; I can see this by logging their destructors.
So I tried to write the following destructor for the big class:
ABigClass::~ABigClass()
{
    SmallClass1->~ASmallClass();
    SmallClass2->~ASmallClass();
    ......
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}
SmallClass inherits from another class which has its own constructor and destructor, but I do not call that anywhere.
Sometimes it works, but mostly it causes the UE editor to crash when the code is compiled, or when the game is started or stopped.
Probably there is some more common way to do what I want to do? Or some validation which will prevent the crash?
Please help.
Don't manually call a destructor. Replace
SmallClass1->~ASmallClass();
SmallClass2->~ASmallClass();
with either
delete SmallClass1;
SmallClass1 = nullptr;
or with nothing if those types are ref-counted by Unreal in some fashion (likely).
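For the plain C++ (non-UObject) side of that advice, here is roughly why delete and an explicit destructor call are not interchangeable (a sketch with an invented type; UObjects themselves are garbage-collected and are not managed with delete):

#include <cstdio>

struct Plain {
    ~Plain() { std::puts("Plain destroyed"); }
};

int main() {
    Plain *p = new Plain;
    p->~Plain();  // destroys the object only; the heap storage is NOT freed (it leaks here)
    // delete p;  // would now be undefined behaviour: the destructor would run a second time

    Plain *q = new Plain;
    delete q;     // the normal pairing for 'new': destructor call followed by deallocation
    return 0;
}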
Finally I found a way to do this.
First, I needed to get rid of all UPROPERTYs related to the classes which I am going to destroy, except the UPROPERTYs in the class which controls their creation and destruction. If I need to expose these classes to blueprints somewhere else, it can be done with BlueprintCallable getter and setter functions.
Then I needed to calm down UE's garbage collector, which destroys objects on hot reload and on game shutdown in random order, ignoring my destructor hierarchy, which results in an attempt to destroy an already destroyed object and a crash. So before doing something in a destructor with other objects, I need to check whether there is something to destroy, by adding an IsValidLowLevel() check.
And instead of the delete keyword it is better to use the DestructItem() function, which seems to be more garbage-collector-friendly in many ways.
Also, I did not find a way to safely destroy objects which are spawned in the level, probably because they are referenced somewhere else (I don't know where). But since they are the lowest level in my hierarchy, I can just Destroy() them in the world and not worry about when exactly the garbage collector will remove them from memory.
Anyway, I ended up with the following code:
void AGameModeBase::ResetGame()
{
    if (BigClass->IsValidLowLevel()) {
        DestructItem(BigClass);
        BigClass = nullptr;
        BigClass = NewObject<ABigClass>();
        UE_LOG(LogTemp, Warning, TEXT("Bigclass recreated"));
    }
}

ABigClass::~ABigClass()
{
    if (SmallClass1) {
        if (SmallClass1->IsValidLowLevel())
        {
            DestructItem(SmallClass1);
        }
        SmallClass1 = nullptr;
    }
    if (SmallClass2) {
        if (SmallClass2->IsValidLowLevel())
        {
            DestructItem(SmallClass2);
        }
        SmallClass2 = nullptr;
    }
    ...
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}

ASmallClass::~ASmallClass()
{
    for (ATinyClass* q : TinyClasses)
    {
        if (q->IsValidLowLevel())
        {
            q->Destroy();
        }
    }
    TinyClasses = {};
    UE_LOG(LogTemp, Warning, TEXT("SmallClass destroyed"));
}
And no crashes. Probably someone will find this useful when a hierarchy of objects needs to be cleared from a game level without fully reloading it.

WinInet and InternetOpen

The documentation states that InternetOpen can be called multiple times without any issues. My question though is should I be calling InternetCloseHandle on handle returned by it multiple times?
For example, I have a class I call CAPIRequestContext, which has a handle returned by InternetOpen. Each one of my requests has its own copy. Right now, I call InternetCloseHandle in the destructor, so it gets called multiple times.
I'm wondering if the following could cause issues:
Thread A creates a CAPIRequestObject which calls InternetOpen and stores the handle. Thread B does the same, but then goes out of scope before Thread A exits, so Thread B calls the destructor on its own CAPIRequestObject, which calls InternetCloseHandle on the handle returned by InternetOpen.
Should I remove the call to InternetCloseHandle in the destructors of my class? At least for the InternetHandle? I assume I should call InternetCloseHandle for the handle returned by HttpOpenRequest.
I have similar questions regarding the handle returned by InternetConnect. Are these handles shared?
Here is some sample code, minus some external code that I don't think is related to the issue:
class CAPIClient;

class CAPIRequestContext
{
public:
    CAPIRequestContext()
    {
        m_hConn = NULL;
        m_hInternet = NULL;
        m_hRequest = NULL;
    }
    ~CAPIRequestContext()
    {
        if (m_hRequest) InternetCloseHandle(m_hRequest);
        if (m_hConn) InternetCloseHandle(m_hConn);
        if (m_hInternet) InternetCloseHandle(m_hInternet);
    }
    bool Init(const CAPIClient &client, const std::string &uri, const std::string &method)
    {
        ATLASSERT(!(m_hInternet || m_hConn || m_hRequest));
        if (m_hInternet || m_hConn || m_hRequest) throw std::exception("Cannot init request more than once.");
        bool success = false;
        m_AuthToken = *client.m_pAuthToken;
        URI = uri;
        m_hInternet = InternetOpen("MyApp", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
        DWORD requestTimeout = 60 * 1000; // Set timeout to 60 seconds instead of 30
        if (m_hInternet)
        {
            InternetSetOption(m_hInternet, INTERNET_OPTION_RECEIVE_TIMEOUT, &requestTimeout, sizeof(requestTimeout));
            m_hConn = InternetConnect(m_hInternet, (LPSTR)client.m_host.c_str(), client.m_port, NULL, NULL, INTERNET_SERVICE_HTTP, 0, (DWORD_PTR)this);
            if (m_hConn) {
                m_hRequest = HttpOpenRequest(m_hConn, method.c_str(), uri.c_str(), "HTTP/1.1", NULL, NULL, client.GetOpenRequestFlags(), 0);
            }
            if (m_hRequest)
            {
                success = true;
            }
        }
        return success;
    }

    // There are additional calls like
    // SendRequest
    // GetData
    // GetFullResponse

private:
    // Added these methods to make sure I'm not copying the handles
    CAPIRequestContext(const CAPIRequestContext &other) = delete;
    CAPIRequestContext& operator=(const CAPIRequestContext& other) = delete;

private:
    HINTERNET m_hInternet;
    HINTERNET m_hConn;
    HINTERNET m_hRequest;
};
The documentation states that InternetOpen can be called multiple times without any issues. My question though is should I be calling InternetCloseHandle on handle returned by it multiple times?
Yes, as is stated in InternetOpen() documentation:
After the calling application has finished using the HINTERNET handle returned by InternetOpen, it must be closed using the InternetCloseHandle function.
For example, I have a class I call CAPIRequestContext, which has a handle returned by InternetOpen. Each one of my requests has its own copy. Right now, I call InternetCloseHandle in the destructor, so it gets called multiple times.
That would be a correct implementation, if each instance of the class calls InternetOpen(), or otherwise obtains ownership of a unique HINTERNET.
HOWEVER, do be aware that such a class needs to implement the Rule of Three. Basically, if you have to provide a destructor to release a resource, you also need to provide a copy-constructor and copy-assignment operator as well.
But, you can't call InternetCloseHandle() multiple times on the same HINTERNET, so you can't have multiple CAPIRequestContext using the same HINTERNET and all of them calling InternetCloseHandle()1. So, your copy constructor and copy-assignment operator must either:
take ownership of the HINTERNET from the source CAPIRequestContext that is being copied.
be disabled completely to prevent copying one CAPIRequestContext to another.
In your case, I would opt for #2.
1: You would need a per-instance flag indicating which instance can call it and which ones cannot. But this is not good class design. If you need to share an HINTERNET, you should implement reference counting semantics instead, such as provided by std::shared_ptr.
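A sketch of that reference-counted alternative (invented helper names; assumes wininet.lib is linked): every request object keeps a copy of one shared session handle, and the deleter runs exactly once, when the last copy goes away.

#include <windows.h>
#include <wininet.h>
#include <memory>
#include <stdexcept>

using SharedInternet = std::shared_ptr<void>;  // HINTERNET is a void*-style handle

inline SharedInternet OpenSharedInternet() {
    HINTERNET h = InternetOpenA("MyApp", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    if (!h) throw std::runtime_error("InternetOpen failed");
    // Custom deleter: InternetCloseHandle is called once, by the last owner.
    return SharedInternet(h, [](void *p) { InternetCloseHandle(p); });
}

Each CAPIRequestContext would then hold a SharedInternet member by value instead of a raw HINTERNET, and its destructor would not touch the session handle at all.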
I'm wondering if the following could cause issues: Thread A creates a CAPIRequestObject which calls InternetOpen and stores the handle. Thread B does the same, but then goes out of scope before Thread A exits, so Thread B calls the destructor on its own CAPIRequestObject, which calls InternetCloseHandle on the handle returned by InternetOpen.
That is perfectly safe, provided each CAPIRequestObject has its own HINTERNET.
Should I remove the call to InternetCloseHandle in the destructors of my class?
No, if each class instance holds a unique HINTERNET.
I assume I should call InternetCloseHandle for the handle returned by HttpOpenRequest.
Yes, as is stated in the HttpOpenRequest() documentation:
After the calling application has finished using the HINTERNET handle returned by HttpOpenRequest, it must be closed using the InternetCloseHandle function.
I have similar questions regarding the handle returned by InternetConnect. Are these handles shared?
Each HINTERNET must be closed individually.

some problems about c++ try catch [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I have a question about the try/catch mechanism, like this code:
global value default 0;

int thread1()
{
    try
    {
        set global value to 1;
        if exception happens
        {
            jump into catch;
        }
        set global value to 0;
    }
    catch
    {
        ......
    }
}

int thread2()
{
    ASSERT(global value = 0);
}
If I have the fake code as shown: in the try block I set a global value to 1 and then an exception happens; in thread2 I have an ASSERT to test whether this global value is equal to 0. thread2 will then fail, because in thread1 we jumped into the catch block due to the exception. So can anybody give me some explanation? I don't know how try/catch works to prevent this.
Exceptions are a brutal break in the normal flow of execution. The error-handling flow is to find a catch that corresponds to the object thrown; this may unwind back through several function calls. This is why exceptions need care: you have two flows of execution, the normal one and the error one.
If the exception thrown is caught by the catch block directly after, then you just have to set your variable back to 0 in the catch block.
If not, a good solution is RAII (as suggested in the comments). RAII is a very simple idea: since every object created on the stack at the entry of a block is destroyed when control leaves the block, you build an object that encapsulates something, so that its destructor is called whatever happens:
class GlobalControl {
public:
    GlobalControl() { myglob = 1; }  // RAII
    ~GlobalControl() { myglob = 0; } // RRID
};

... // somewhere else
try {
    GlobalControl c; // ctor call inits glob to 1
    ...
} // whatever happens, leaving this block causes a call to the dtor of c
catch (...) {
}
RAII stands for Resource Acquisition Is Initialization: here your resource acquisition is setting your global to 1, and this is done in the initialization part of the object. RAII should really be called RAIIRRID, RAII + Resource Releasing Is Destruction (the resource release is setting your global back to 0, and this is done in the destructor).

Trouble tracking down a Bus Error/Seg Fault in C++ and Linux

I have a program that processes neural spike data that is broadcast in UDP packets on a local network.
My current program has two threads: a UI thread and a worker thread. The worker thread simply listens for data packets, parses them and makes them available to the UI thread for display and processing. My current implementation works just fine. However, for a variety of reasons I'm trying to re-write the program in C++ using an object-oriented approach.
The current working program initialized the 2nd thread with:
pthread_t netThread;
net = NetCom::initUdpRx(host,port);
pthread_create(&netThread, NULL, getNetSpike, (void *)NULL);
Here is the getNetSpike function that is called by the new thread:
void *getNetSpike(void *ptr){
    while(true)
    {
        spike_net_t s;
        NetCom::rxSpike(net, &s);
        spikeBuff[writeIdx] = s;
        writeIdx = incrementIdx(writeIdx);
        nSpikes += 1;
        totalSpikesRead++;
    }
}
Now in my new OO version of the program I setup the 2nd thread in much the same way:
void SpikePlot::initNetworkRxThread(){
    pthread_t netThread;
    net = NetCom::initUdpRx(host, port);
    pthread_create(&netThread, NULL, networkThreadFunc, this);
}
However, because pthread_create takes a pointer to a plain function and not a pointer to an object's member method, I needed to create this simple function that wraps the SpikePlot::getNetworkSpikePacket() method:
void *networkThreadFunc(void *ptr){
    SpikePlot *sp = reinterpret_cast<SpikePlot *>(ptr);
    while(true)
    {
        sp->getNetworkSpikePacket();
    }
}
Which then calls the getNetworkSpikePacket() method:
void SpikePlot::getNetworkSpikePacket(){
    spike_net_t s;
    NetCom::rxSpike(net, &s);
    spikeBuff[writeIdx] = s; // <--- SegFault/BusError occurs on this line
    writeIdx = incrementIdx(writeIdx);
    nSpikes += 1;
    totalSpikesRead++;
}
The code for the two implementations is nearly identical but the 2nd implementation (OO version) crashes with a SegFault or BusError after the first packet that is read. Using printf I've narrowed down which line is causing the error:
spikeBuff[writeIdx] = s;
and for the life of me I can't figure out why it's causing my program to crash.
What am I doing wrong here?
Update:
I define spikeBuff as a private member of the class:
class SpikePlot{
private:
    static int const MAX_SPIKE_BUFF_SIZE = 50;
    spike_net_t spikeBuff[MAX_SPIKE_BUFF_SIZE];
    ....
}
Then in the SpikePlot constructor I call:
bzero(&spikeBuff, sizeof(spikeBuff));
and set:
writeIdx =0;
Update 2: Ok something really weird is going on with my index variables. To test their sanity I changed getNetworkSpikePacket to:
void TetrodePlot::getNetworkSpikePacket(){
    printf("Before:writeIdx:%d nspikes:%d totSpike:%d\n", writeIdx, nSpikes, totalSpikesRead);
    spike_net_t s;
    NetCom::rxSpike(net, &s);
    // spikeBuff[writeIdx] = s;
    writeIdx++; // = incrementIdx(writeIdx);
    // if (writeIdx >= MAX_SPIKE_BUFF_SIZE)
    //     writeIdx = 0;
    nSpikes += 1;
    totalSpikesRead += 1;
    printf("After:writeIdx:%d nspikes:%d totSpike:%d\n\n", writeIdx, nSpikes, totalSpikesRead);
}
And I get the following output to the console:
Before:writeIdx:0 nspikes:0 totSpike:0
After:writeIdx:1 nspikes:32763 totSpike:2053729378
Before:writeIdx:1 nspikes:32763 totSpike:2053729378
After:writeIdx:1 nspikes:0 totSpike:1
Before:writeIdx:1 nspikes:0 totSpike:1
After:writeIdx:32768 nspikes:32768 totSpike:260289889
Before:writeIdx:32768 nspikes:32768 totSpike:260289889
After:writeIdx:32768 nspikes:32768 totSpike:260289890
This method is the only method where I update their values (besides the constructor where I set them to 0). All other uses of these variables are read only.
I'm going to go out on a limb here and say all your problems are caused by the zeroing out of the spike_net_t array.
In C++ you must not zero out objects with non-[insert word for 'struct-like' here] members, i.e. if you have an object that contains a complex object (a std::string, a vector, etc.) you cannot zero it out, as this destroys the initialization of the object done in the constructor.
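A sketch of the alternative being hinted at: initialise each member explicitly in the constructor instead of bzero()-ing the whole object. The spike_net_t stub and the integer types here are assumptions; only the member names come from the question.

struct spike_net_t { /* fields filled in by NetCom::rxSpike() */ };

class SpikePlot {
public:
    SpikePlot() : writeIdx(0), nSpikes(0), totalSpikesRead(0) {
        // If spike_net_t really is a plain-old-data struct, zeroing just the array is
        // fine; zeroing *this would also stomp any non-trivial members the class might
        // gain later (std::string, std::vector, a vtable pointer, ...).
        // std::memset(spikeBuff, 0, sizeof(spikeBuff));
    }
private:
    static int const MAX_SPIKE_BUFF_SIZE = 50;
    spike_net_t spikeBuff[MAX_SPIKE_BUFF_SIZE];
    int writeIdx;
    int nSpikes;
    int totalSpikesRead;
};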
This may be wrong but....
You seemed to move the wait loop logic out of the method and into the static wrapper. With nothing holding the worker thread open, perhaps that thread terminates after the first time you wait for a UDP packet, so second time around, sp in the static method now points to an instance that has left scope and been destructed?
Can you try to assert(sp) in the wrapper before trying to call its getNetworkSpikePacket()?
It looks like your reinterpret_cast might be causing some problems. When you call pthread_create, you are passing in "this" which is a SpikePlot*, but inside networkThreadFunc, you are casting it to a TetrodePlot*.
Are SpikePlot and TetrodePlot related? This isn't called out in what you've posted.
If you are allocating the spikeBuff array anywhere then make sure you are allocating sufficient storage so writeIdx is not an out-of-bounds index.
I'd also check that initNetworkRxThread is being called on an allocated instance of a SpikePlot object (and not just on a declared pointer).

Portable thread-safe lazy singleton

Greetings to all.
I'm trying to write a thread-safe lazy singleton for future use. Here's the best I could come up with. Can anyone spot any problems with it? The key assumption is that static initialization occurs in a single thread before dynamic initialisation. (This will be used for a commercial project and the company is not using boost :(, life would be a breeze otherwise :)
PS: Haven't checked that this compiles yet, my apologies.
/*
    There are two difficulties when implementing the singleton pattern:

    Problem (a): The "global variable instantiation fiasco". TODO: URL
    This is due to the unspecified order in which global variables are initialised. Static class members are equivalent
    to a global variable in C++ during initialisation.

    Problem (b): Multi-threading.
    Care must be taken to ensure that the mutex initialisation is handled properly with respect to problem (a).
*/

/*
    Things achieved, maybe:
    *) Portable
    *) Lazy creation.
    *) Safe from unspecified order of global variable initialisation.
    *) Thread-safe.
    *) Mutex is properly initialised when invoked during global variable initialisation.
    *) Effectively lock free in instance().
*/

/************************************************************************************
Platform dependent mutex implementation
*/
class Mutex
{
public:
    void lock();
    void unlock();
};

/************************************************************************************
Threadsafe singleton
*/
class Singleton
{
public: // Interface
    static Singleton* Instance();

private: // Static helper functions
    static Mutex* getMutex();

private: // Static members
    static Singleton* _pInstance;
    static Mutex* _pMutex;

private: // Instance members
    bool* _pInstanceCreated; // This is here to convince myself that the compiler is not re-ordering instructions.

private: // Singletons can't be copied
    explicit Singleton();
    ~Singleton() { }
};

/************************************************************************************
We can't use a static class member variable to initialise the mutex due to the unspecified
order of initialisation of global variables.
Calling this from
*/
Mutex* Singleton::getMutex()
{
    static Mutex* pMutex = 0; // alternatively: static Mutex* pMutex = new Mutex();
    if( !pMutex )
    {
        pMutex = new Mutex(); // Constructor initialises the mutex: eg. pthread_mutex_init( ... )
    }
    return pMutex;
}

/************************************************************************************
This static member variable ensures that we call Singleton::getMutex() at least once before
the main entry point of the program so that the mutex is always initialised before any threads
are created.
*/
Mutex* Singleton::_pMutex = Singleton::getMutex();

/************************************************************************************
Keep track of the singleton object for possible deletion.
*/
Singleton* Singleton::_pInstance = Singleton::Instance();

/************************************************************************************
Read the comments in Singleton::Instance().
*/
Singleton::Singleton( bool* pInstanceCreated )
{
    fprintf( stderr, "Constructor\n" );
    _pInstanceCreated = pInstanceCreated;
}

/************************************************************************************
Read the comments in Singleton::Instance().
*/
void Singleton::setInstanceCreated()
{
    _pInstanceCreated = true;
}

/************************************************************************************
Fingers crossed.
*/
Singleton* Singleton::Instance()
{
    /*
        'instance' is initialised to zero the first time control flows over it. So
        this avoids the unspecified order of global variable initialisation problem.
    */
    static Singleton* instance = 0;

    /*
        When we do:
            instance = new Singleton( instanceCreated );
        the compiler can reorder instructions any way it wants as long
        as the observed behaviour is consistent with that of a single threaded environment (assuming
        that no thread-safe compiler flags are specified). The following is thus not threadsafe:

        if( !instance )
        {
            lock();
            if( !instance )
            {
                instance = new Singleton( instanceCreated );
            }
            unlock();
        }

        Instead we use:
            static bool instanceCreated = false;
        as the initialisation indicator.
    */
    static bool instanceCreated = false;

    /*
        Double check pattern with a slight twist.
    */
    if( !instanceCreated )
    {
        getMutex()->lock();
        if( !instanceCreated )
        {
            /*
                The ctor keeps a persistent reference to 'instanceCreated',
                in order to convince ourselves of the correct order of initialisation (I think
                this is quite unnecessary).
            */
            instance = new Singleton( instanceCreated );

            /*
                Set the reference to 'instanceCreated' to true.
                Note that since setInstanceCreated() actually uses the non-static
                member variable '_pInstanceCreated', I can't see the compiler taking the
                liberty to call Singleton's ctor AFTER the following call. (I don't know
                much about compiler optimisation, but I doubt that it will break up the ctor into
                two functions and call one part of it before the following call and the other part after.)
            */
            instance->setInstanceCreated();

            /*
                The double check pattern should now work.
            */
        }
        getMutex()->unlock();
    }
    return instance;
}
No, this will not work. It is broken.
The problem has little/nothing to do with the compiler. It has to do with the order in which a second CPU will 'see' what the first CPU has done to memory. The memory (and caches) will be consistent, but the timing of WHEN each CPU decides to write or read each part of memory/cache is indeterminate.
So for CPU1:
instance = new Singleton( instanceCreated );
instance->setInstanceCreated();
Let's consider the compiler first. There is NO reason why the compiler doesn't reorder or otherwise alter these functions. Maybe like:
temp_register = new Singleton(instanceCreated);
temp_register->setInstanceCreated();
instance = temp_register;
or many other possibilities - like you said as long as single-threaded observed behaviour is consistent. This DOES include things like " break up the ctor into two functions and call one part of it before the following call and the other part after."
Now, it probably wouldn't break it up into 2 calls, but it would INLINE the ctor, particularly since it is so small. Then, once inlined, everything may be reordered, as if the ctor was broken in 2, for example.
In general, I would say not only is it possible that the compiler reordered things, it is probable - ie for the code you have, there is probably a reordering (once inlined, and inlining is likely) that is 'better' than the order given by the C++ code.
But let's leave that aside, and try to understand the real issues of double-checked locking.
So, let's just assume the compiler didn't reorder anything. What about the CPU? Or more importantly CPUs - plural.
The first CPU, 'CPU1' needs to follow the instructions given by the compiler, in particular, it needs to write to memory the things it has been told to write:
instance,
instanceCreated
other member variable of the Singleton (ie your Singleton does DO something, and has some state, doesn't it?)
Actually, that 'other member variable' stuff is really important. Important for your singleton - that's its real purpose right?, and important for our discussion. So let's give it a name: important_data. ie instance->important_data. And maybe instance->important_function(), which uses important_data. Etc.
As mentioned, let's assume the compiler has written the code such that these items are written in the order you are expecting, namely:
important_data - written inside the ctor, called from
instance = new Singleton(instanceCreated);
instance - assigned right after new/ctor returns
instanceCreated - inside setInstanceCreated()
Now, the CPU hands these writes off to the memory bus. Know what the memory bus does? IT REORDERS THEM. The CPU and architecture has the same constraints as the compiler - ie make sure this one CPU sees things consistently - ie single threaded consistent. So if, for example, instance and instanceCreated are on the same cache-line (highly likely, actually), they might be written together, and since they were just read, that cache-line is 'hot', so maybe they get written FIRST before important_data, so that that cache-line can be retired to make room for the cache-line where important_data lives.
Did you see that? instanceCreated and instance were just committed to memory BEFORE important_data. Note that CPU1 doesn't care, because it is living in a single-threaded world...
So now introduce CPU2:
CPU2 comes in, sees instanceCreated == true and instance != NULL and thus goes off and decides to call Singleton::Instance()->important_function(), which uses important_data, which is uninitialized. CRASH BANG BOOM.
By the way, it gets worse. So far, we've seen that the compiler could reorder, but we're pretending it didn't. Let's go one step further and pretend that CPU1 did NOT reorder any of the memory writing. Are we OK now?
No. Of course not.
Just as CPU1 decided to optimize/reorder its memory writes, CPU2 can REORDER ITS READS!
CPU2 comes in and sees
if (!instanceCreated) ...
so it needs to read instanceCreated. Ever heard of 'speculative execution'? (Great name for a FPS game, by the way). If the memory bus isn't busy doing anything, CPU2 might pre-read some other values 'hoping' that instanceCreated is true. ie it may pre-read important_data for example. Maybe important_data (or the uninitialized, possibly re-claimed-by-the-allocator memory that will become important_data) is already in CPU2's cache. Or maybe (more likely?) CPU2 just free'd that memory, and the allocator wrote NULL in its first 4 bytes (allocators often use that memory for their free-lists), so actually, the memory soon-to-become important_data may actually still be in the write queue of CPU2. In that case, why would CPU2 bother re-reading that memory, when it hasn't even finished writing it yet!? (it wouldn't - it would just get the values from its write-queue.)
Did that make sense? If not, imagine that the value of instance (which is a pointer) is 0x17e823d0. What was that memory doing before it became (becomes) the Singleton? Is that memory still in the write-queue of CPU2?...
Or basically, don't even think about why it might want to do so, but realize that CPU2 might read important_data first, then instanceCreated second. So even though CPU1 may have wrote them in order CPU2 sees 'crap' in important_data, then sees true in instanceCreated (and who knows what in instance!). Again, CRASH BANG BOOM. Or BOOM CRASH BANG, since by now you realize that the order isn't guaranteed...
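For what it's worth, the ordering problem described above is exactly what C++11 acquire/release atomics were designed to express. The original question rules out boost and predates C++11, so treat the following only as a sketch of how the double-checked pattern can be written correctly when those tools are available:

#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton* Instance() {
        Singleton* p = s_instance.load(std::memory_order_acquire); // acquire: if non-null, the object's writes are visible
        if (!p) {
            std::lock_guard<std::mutex> lock(s_mutex);
            p = s_instance.load(std::memory_order_relaxed);        // re-check under the lock
            if (!p) {
                p = new Singleton();                               // fully construct first...
                s_instance.store(p, std::memory_order_release);    // ...then publish the pointer
            }
        }
        return p;
    }
private:
    Singleton() {}
    static std::atomic<Singleton*> s_instance;
    static std::mutex s_mutex;
};

std::atomic<Singleton*> Singleton::s_instance(nullptr);
std::mutex Singleton::s_mutex;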
It's usually better to have a non-lazy singleton which does nothing in its constructor, and then in GetInstance do a thread-safe call once to a function which allocates any expensive resources. You're already creating a Mutex non-lazily, so why not just put the mutex and some kind of Pimpl in your Singleton object?
By the way, this is easier on Posix:
struct Singleton {
    static Singleton *GetInstance() {
        pthread_once(&control, doInit);
        return instance;
    }
private:
    static void doInit() {
        // slight problem: we can't throw from here, or fail
        try {
            instance = new Singleton();
        } catch (...) {
            // we could stash an error indicator in a static member,
            // and check it in GetInstance.
            std::abort();
        }
    }
    static pthread_once_t control;
    static Singleton *instance;
};

pthread_once_t Singleton::control = PTHREAD_ONCE_INIT;
Singleton *Singleton::instance = 0;
There do exist pthread_once implementations for Windows and other platforms.
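If C++11 is an option (an assumption; the question's constraints may rule it out), std::call_once is a portable equivalent of the pthread_once approach, and a function-local static ("Meyers singleton") is even simpler, since C++11 guarantees its initialisation is thread-safe. A sketch of the call_once version:

#include <mutex>

struct Singleton {
    static Singleton* GetInstance() {
        // call_once guarantees the lambda runs exactly once, even with concurrent callers.
        std::call_once(control, [] { instance = new Singleton(); });
        return instance;
    }
private:
    Singleton() {}
    static std::once_flag control;
    static Singleton* instance;
};

std::once_flag Singleton::control;
Singleton* Singleton::instance = nullptr;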
If you wish to see an in-depth discussion of Singletons, the various policies about their lifetime and the thread safety issues, I can only recommend a good read: "Modern C++ Design" by Alexandrescu.
The implementation is available on the web in the Loki library.
And yes, it does fit in a single header file. So I would really encourage you to at least grab the file and read it, and better yet read the book to get the full-blown discussion.
At global scope in your code:
/************************************************************************************
Keep track of the singleton object for possible deletion.
*/
Singleton* Singleton::_pInstance = Singleton::Instance();
This makes your implementation not lazy. Presumably you want to set _pInstance to NULL at global scope, and assign to it after you construct the singleton inside Instance() before you unlock the mutex.
More food for thought from Meyers & Alexandrescu, with Singleton being the specific target: C++ and the Perils of Double-Checked Locking. It's a bit of a prickly problem.