Extra brace brackets in C++ code

Sometimes you run into code that has extra brace brackets that have nothing to do with scope; they are only there for readability and to avoid mistakes.
For example:
GetMutexLock( handle ) ;
{
// brace brackets "scope" the lock,
// must close block / remember
// to release the handle.
// similar to C#'s lock construct
}
ReleaseMutexLock( handle ) ;
Other places I have seen it are:
glBegin( GL_TRIANGLES ) ;
{
glVertex3d( .. ) ;
glVertex3d( .. ) ;
glVertex3d( .. ) ;
} // must remember to glEnd!
glEnd() ;
This introduces a compiler error if the mutex isn't freed (assuming you remember both the } and the Release() call).
Is this a bad practice? Why?
If it isn't one, could it change how the code is compiled or make it slower?

The braces themselves are fine; all they do is limit scope, and you won't slow anything down. It can be seen as cleaner. (Always prefer clean code over fast code: if it's cleaner, don't worry about the speed until you profile.)
But with respect to resources it's bad practice because you've put yourself in a position to leak a resource. If anything in the block throws or returns, bang you're dead.
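For illustration, a sketch of how the manual version can leak (do_work is a hypothetical call that might throw or return early):
GetMutexLock( handle ) ;
{
    do_work() ;              // if this throws, or the function returns from here,
                             // control never reaches the release below
}
ReleaseMutexLock( handle ) ; // skipped on an exception, so the mutex stays held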
Use Scope-bound Resource Management (SBRM, also known as RAII), which limits a resource to a scope, by using the destructor:
class mutex_lock
{
public:
mutex_lock(HANDLE pHandle) :
mHandle(pHandle)
{
//acquire resource
GetMutexLock(mHandle);
}
~mutex_lock()
{
// release resource, bound to scope
ReleaseMutexLock(mHandle);
}
private:
// resource
HANDLE mHandle;
// noncopyable
mutex_lock(const mutex_lock&);
mutex_lock& operator=(const mutex_lock&);
};
So you get:
{
mutex_lock m(handle);
// brace brackets "scope" the lock,
// AUTOMATICALLY
}
Do this with all resources; it's cleaner and safer. If you are in a position to say "I need to release this resource", you've done it wrong; they should be handled automatically.

Braces affect variable scope. As far as I know that is all they do.
Yes, this can affect how the program is compiled. Destructors will be called at the end of the block instead of waiting until the end of the function.
Often this is what you want to do. For example, your GetMutexLock and ReleaseMutexLock would be much better C++ code written like this:
struct MutexLocker {
Handle handle;
MutexLocker(Handle handle) : handle(handle) { GetMutexLock(handle); }
~MutexLocker() { ReleaseMutexLock(handle); }
};
...
{
MutexLocker lock(handle);
// brace brackets "scope" the lock,
// must close block / remember
// to release the handle.
// similar to C#'s lock construct
}
Using this more C++ style, the lock is released automatically at the end of the block. It will be released in all circumstances, including exceptions, with the exception of setjmp/longjmp, a program crash, or abort.

It's not bad practice. It does not make anything slower; it's just a way of structuring the code.
Getting the compiler to do error-checking & enforcing for you is always a good thing!

The specific placement of { ... } in your original example serves purely as formatting sugar by making it more obvious where a group of logically related statements begins and where it ends. As shown in your examples, it has no effect on the compiled code.
I don't know what you mean by "this introduces a compiler error if the mutex isn't freed". That's simply not true. Such use of { ... } cannot and will not introduce any compiler errors.
Whether it is a good practice is a matter of personal preference. It looks OK. Alternatively, you can use comments and/or indentation to indicate logical grouping of statements in the code, without any extra { ... }.
There are various scoping-based techniques out there, some of which have been illustrated by the other answers here, but what you have in your OP doesn't even remotely look anything like that. Once again, what you have in your OP (as shown) is purely a source formatting habit with superfluous { ... } that have no effect on the generated code.

It will make no difference to the compiled code, apart from calling any destructors at the end of that block rather than the end of the surrounding block, unless the compiler is completely insane.
Personally, I would call it bad practice; the way to avoid the kind of mistakes you might make here is to use scoped resource management (sometimes called RAII), not to use error-prone typographical reminders. I would write the code as something like
{
mutex::scoped_lock lock(mutex);
// brace brackets *really* scope the lock
} // scoped_lock destructor releases the lock
{
gl_group gl(GL_TRIANGLES); // calls glBegin()
gl.Vertex3d( .. );
gl.Vertex3d( .. );
gl.Vertex3d( .. );
} // gl_group destructor calls glEnd()
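For completeness, a minimal sketch of what the gl_group wrapper above could look like (gl_group is not a real OpenGL type, just the name used in the snippet; this is only a sketch):
#include <GL/gl.h>

class gl_group
{
public:
    explicit gl_group( GLenum mode ) { glBegin( mode ); }   // begin on construction
    ~gl_group() { glEnd(); }                                 // end on destruction
    void Vertex3d( double x, double y, double z ) { glVertex3d( x, y, z ); }
private:
    gl_group( const gl_group& );             // noncopyable
    gl_group& operator=( const gl_group& );
};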

If you're putting code into braces, you should probably break it out into its own method. If it's a single discrete unit, why not label it and break it out functionally? That will make it explicit what the block does, and people who later read the code won't have to figure it out.
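For example, using the question's OpenGL snippet (draw_triangle and the coordinates are just illustrative):
void draw_triangle()
{
    glVertex3d( 0.0, 0.0, 0.0 );   // the block now has a name that states its intent
    glVertex3d( 1.0, 0.0, 0.0 );
    glVertex3d( 0.0, 1.0, 0.0 );
}

...

glBegin( GL_TRIANGLES );
draw_triangle();
glEnd();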

Anything that improves readability IMHO is good practice. If adding braces helps with the readability, then go for it!
Adding additional braces will not change how the code is compiled. It won't make the running of the program any slower.

This is much more useful (IMHO) in C++ with object destructors; your examples are in C.
Imagine if you made a MutexLock class:
class MutexLock {
private:
HANDLE handle;
public:
MutexLock(HANDLE h) : handle(h) {
GetMutexLock(handle);
}
~MutexLock() {
ReleaseMutexLock(handle);
}
};
Then you could scope that lock to just the code that needed it by providing a new scope with the braces:
{
MutexLock mtx(handle); // Allocated on the stack in this new scope
// Use shared resource
}
// When this scope exits the destructor on mtx is called and the stack is popped

Related

Please solve this memory leak problem in C++

#include <crtdbg.h> // for _CrtSetDbgFlag (MSVC debug CRT)
#include <memory>
class A
{
public:
std::unique_ptr<int> m_pM;
A() { m_pM = std::make_unique<int>(5); };
~A() { };
public:
void loop() { while (1) {}; } // stands in for doing some ongoing work (simplified)
};
int main()
{
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
A a;
a.loop(); // if I force quit while this function is running, it causes a memory leak
}
Is there any way to avoid a memory leak when I force quit while this program is running?
a.loop() is an infinite loop so everything after that is unreachable, so the compiler is within its rights to remove all code after the call to a.loop(). See the compiler explorer for proof.
I believe that outside of some niche and very rare scenarios truly infinite loops like the one you wrote here are pretty useless, since they literally mean “loop indefinitely”. So what’s the compiler supposed to do? In some sense it just postpones the destruction of your object until some infinite time in the future.
What you usually do is use break inside such loop and break when some condition is met. A simplified example: https://godbolt.org/z/sxr7eG4W1
Here you can see the unique_ptr::default_delete in the disassembly and also see that the compiler is actually checking the condition inside the loop.
Note: extern volatile is used to ensure the compiler doesn't optimise away the flag, since it's a simplified example and the compiler is smart enough to figure out that the flag is not changed. In real code I'd advise against using volatile. Just check for the stop condition. That's it.
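Roughly the shape of that linked example, as a sketch (stop_requested stands in for whatever real stop condition you have):
#include <memory>

volatile bool stop_requested = false;   // in the linked example this is extern volatile,
                                        // set from outside (another thread, a signal, ...)

class A
{
public:
    std::unique_ptr<int> m_pM = std::make_unique<int>(5);
    void loop()
    {
        while (true)
        {
            if (stop_requested)
                break;                  // leave the loop normally...
        }
    }
};

int main()
{
    A a;
    a.loop();
}                                       // ...so a is destroyed and the unique_ptr is freed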

C++ syntax I don't understand

I've found a C++ code that has this syntax:
void MyClass::method()
{
beginResetModel();
{
// Various stuff
}
endResetModel();
}
I've no idea why there are { } after a line ending with ; but it seems there is no problem making it compile and run. Is it possible this has something to do with the fact that the code may be asynchronous (I'm not sure yet)? Or maybe the { } are only here to delimit a part of the code and don't really make a difference, but honestly I doubt this. I don't know, does someone have any clue what this syntax means?
More info: There is no other reference to beginResetModel, resetModel or ResetModel in the whole project (searched with grep). Btw the project is a Qt one. Maybe it's another Qt-related macro I haven't heard of.
Using {} will create a new scope. In your case, any variable created in those braces will cease to exist at the } in the end.
beginResetModel();
{
// Various stuff
}
endResetModel();
The open and close braces in your code are a very important feature in C++, as they delimit a new scope. You can appreciate the power of this in combination with another powerful language feature: destructors.
So, suppose that inside those braces you have code that creates various objects, like graphics models, or whatever.
Assuming that these objects are instances of classes that allocate resources (e.g. textures on the video card), and those classes have destructors that release the allocated resources, you are guaranteed that, at the }, these destructors are automatically invoked.
In this way, all the allocated resources are automatically released, before the code outside the closing curly brace, e.g. before the call to endResetModel() in your sample.
This automatic and deterministic resource management is a key powerful feature of C++.
Now, suppose that you remove the curly braces, and your method looks like this:
void MyClass::method()
{
beginResetModel();
// {
// Various stuff
// }
endResetModel();
}
Now, all the objects created in the Various stuff section of code will be destroyed before the } that terminates the MyClass::method(), but after the call to endResetModel().
So, in this case, you end up with the endResetModel() call, followed by other release code that runs after it. This may cause bugs.
On the other hand, the curly braces that define a new scope enclosed in begin/endResetModel() do guarantee that all the objects created inside this scope are destroyed before endResetModel() is invoked.
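If you want that guarantee to be explicit, the same RAII idea can be applied to the begin/end pair itself. A sketch (ModelResetGuard is not a Qt class; in real Qt code beginResetModel()/endResetModel() are protected, so such a guard would have to be a member or friend of the model):
class ModelResetGuard
{
public:
    explicit ModelResetGuard( MyClass& model ) : m_model( model ) { m_model.beginResetModel(); }
    ~ModelResetGuard() { m_model.endResetModel(); }
private:
    MyClass& m_model;
    ModelResetGuard( const ModelResetGuard& );             // noncopyable
    ModelResetGuard& operator=( const ModelResetGuard& );
};

void MyClass::method()
{
    ModelResetGuard guard( *this );
    // Various stuff
} // locals from "Various stuff" are destroyed first, then the guard calls endResetModel()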
{} delimits a scope. That means that any variable declared inside there is not accessible outside of it and is erased from memory once the } is reached. Here is an example:
#include <iostream>
using namespace std;
class MyClass{
public:
~MyClass(){
cout << "Destructor called" << endl;
}
};
int main(){
{
int x = 3;
MyClass foo;
cout << x << endl; //Prints 3
} //Here "Destructor called" is printed since foo is cleared from the memory
cout << x << endl; //Compiler error, x isn't defined here
return 0;
}
Usually scopes are used for functions, loops, if-statements, etc, but you're perfectly allowed to use scopes without any statement before them. This can be particularly useful to declare variables inside a switch (this answer explains why).
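For instance, a minimal sketch of the switch case mentioned above:
#include <iostream>

void describe( int value )
{
    switch (value)
    {
    case 1:
    {                                 // braces give the declaration its own scope;
        int doubled = value * 2;      // without them, jumping to another case would
        std::cout << doubled << '\n'; // bypass this initialisation and fail to compile
        break;
    }
    case 2:
        std::cout << "two" << '\n';
        break;
    }
}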
As others have pointed out, the curly braces create a new scope, but maybe the interesting thing is why you would want to do that - that is, what is the difference between using it and not using it. There are cases where scopes are obviously necessary, such as with if or for blocks; if you don't create a scope after them you can only have one statement. Another possible reason is that maybe you use a variable in one part of the function and do not want it to be used outside of that part, so you put it into its own scope. However, the main use of scopes outside of control statements has to do with RAII. When you declare an instance variable (not a pointer or reference), it is always initialized; when it goes out of scope, it is always destroyed. This can be used to define blocks that require some setup at the beginning and some tear down at the end (if you are familiar with Python, similar to with blocks).
Take this example:
#include <mutex>
void fun(std::mutex & mutex) {
// 1. Perform some computations...
{
std::lock_guard<std::mutex> lock(mutex);
// 2. Operations in this scope are performed with the mutex locked
}
// 3. More computations...
}
In this example, part 2 is only run after the mutex has been acquired, and is released before part 3 starts. If you remove the additional scope:
#include <mutex>
void fun(std::mutex & mutex) {
// 1. Perform some computations...
std::lock_guard<std::mutex> lock(mutex);
// 2. Operations in this scope are performed with the mutex locked
// 3. More computations...
}
In this case the mutex is acquired before starting part 2, but it is held until part 3 is complete (possibly producing more interlocking between threads than necessary). Note, however, that in both cases there was no need to specify when the lock is released; std::lock_guard is responsible for both acquiring the lock on construction and releasing it on destruction (i.e. when it goes out of scope).

How should one log when an exception is triggered?

In a program I recently wrote, I wanted to log when my "business logic" code triggered an exception in third-party or project APIs. (To clarify, I want to log when use of an API causes an exception. This can be many frames above the actual throw, and may be many frames below the actual catch, where logging of the exception payload can occur.) I did the following:
void former_function()
{
/* some code here */
try
{
/* some specific code that I know may throw, and want to log about */
}
catch( ... )
{
log( "an exception occurred when doing something with some other data" );
throw;
}
/* some code here */
}
In short, if an exception occurs, create a catch-all clause, log the error, and re-throw. In my mind this is safe. I know in general catch-all is considered bad, since one doesn't have a reference to the exception at all to get any useful information. However, I'm just going to re-throw it, so nothing is lost.
Now, on its own it was fine, but some other programmers modified this program, and ended up violating the above. Specifically, they put a huge amount of code into the try-block in one case, and in another removed the 'throw' and placed a 'return'.
I see now my solution was brittle; it wasn't future-modification-proof.
I want a better solution that does not have these problems.
I have another potential solution that doesn't have the above issue, but I wonder what others think of it. It uses RAII, specifically a "Scoped Exit" object that implicitly triggers if std::uncaught_exception is not true on construction, yet is true on destruction:
#include <ciso646> // not, and
#include <exception> // uncaught_exception
#include <string> // std::string
class ExceptionTriggeredLog
{
private:
std::string const m_log_message;
bool const m_was_uncaught_exception;
public:
ExceptionTriggeredLog( std::string const& r_log_message )
: m_log_message( r_log_message ),
m_was_uncaught_exception( std::uncaught_exception() )
{
}
~ExceptionTriggeredLog()
{
if( not m_was_uncaught_exception
and std::uncaught_exception() )
{
try
{
log( m_log_message );
}
catch( ... )
{
// no exceptions can leave a destructor,
// especially when std::uncaught_exception is true.
}
}
}
};
void potential_function()
{
/* some code here */
{
ExceptionTriggeredLog exception_triggered_log( "an exception occurred when doing something with some other data" );
/* some specific code that I know may throw, and want to log about */
}
/* some code here */
}
I want to know:
technically, would this work robustly? Initially it seems to work, but I know there are some caveats about using std::uncaught_exception.
is there another way to accomplish what I want?
Note: I've updated this question. Specifically, I've:
added the try/catch that was initially missing, around the log function-call.
added tracking the std::uncaught_exception state on construction. This guards against the case where this object is created inside a 'try' block of another destructor which is triggered as part of exception stack-unwinding.
fixed the new 'potential_function' to create a named object, not a temporary object as before.
I have no comment on your method, but it seems interesting! I have another way that might also work for what you want, and might be a little more general-purpose. It requires lambdas from C++11 though, which might or might not be an issue in your case.
It's a simple function template that accepts a lambda, runs it and catches, logs and rethrows all exceptions:
template <typename F>
void try_and_log (char const * log_message, F code_block)
{
try {
code_block ();
} catch (...) {
log (log_message);
throw;
}
}
The way you use it (in the simplest case) is like this:
try_and_log ("An exception was thrown here...", [&] {
this_is_the_code ();
you_want_executed ();
and_its_exceptions_logged ();
});
As I said before, I don't know how it stacks against your own solution. Note that the lambda is capturing everything from its enclosing scope, which is quite convenient. Also note that I haven't actually tried this, so compile errors, logical problems and/or nuclear wars may result from this.
The problem I see here is that it is not easy to wrap this into a macro, and expecting your colleagues to write the [&] { and } parts correctly and all the time might be too much!
For wrapping and idiot-proofing purposes, you'll probably need two macros: a TRY_AND_LOG_BEGIN to emit the first line up to the opening brace for the lambda and a TRY_AND_LOG_END to emit the closing brace and parenthesis. Like so:
#define TRY_AND_LOG_BEGIN(message) try_and_log (message, [&] {
#define TRY_AND_LOG_END() })
And you use them like this:
TRY_AND_LOG_BEGIN ("Exception happened!") // No semicolons here!
whatever_code_you_want ();
TRY_AND_LOG_END ();
Which, depending on your perspective, is either a net gain or a net loss! (I personally prefer the straightforward function call and lambda syntax, which gives me more control and transparency.)
Also, it is possible to write the log message at the end of the code block; just switch the two parameters of the try_and_log function.
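That variant would just look like this (same caveats as above; log and this_is_the_code are the placeholder names from earlier):
template <typename F>
void try_and_log (F code_block, char const * log_message)
{
    try {
        code_block ();
    } catch (...) {
        log (log_message);
        throw;
    }
}

// usage: the message now reads like a footnote to the block
try_and_log ([&] {
    this_is_the_code ();
}, "An exception was thrown here...");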

Use of goto for cleanly exiting a loop

I have a question about use of the goto statement in C++. I understand that this topic is controversial, and am not interested in any sweeping advice or arguments (I usually stray from using goto). Rather, I have a specific situation and want to understand whether my solution, which makes use of the goto statement, is a good one or not. I would not call myself new to C++, but would not classify myself as a professional-level programmer either. The part of the code which has generated my question spins in an infinite loop once started. The general flow of the thread in pseudocode is as follows:
void ControlLoop::main_loop()
{
InitializeAndCheckHardware(pHardware) //pHardware is a pointer given from outside
//The main loop
while (m_bIsRunning)
{
simulated_time += time_increment; //this will probably be += 0.001 seconds
ReadSensorData();
if (data_is_bad) {
m_bIsRunning = false;
goto loop_end;
}
ApplyFilterToData();
ComputeControllerOutput();
SendOutputToHardware();
ProcessPendingEvents();
while ( GetWallClockTime() < simulated_time ) {}
if ( end_condition_is_satisified ) m_bIsRunning = false;
}
loop_end:
DeInitializeHardware(pHardware);
}
The pHardware pointer is passed in from outside the ControlLoop object and has a polymorphic type, so it doesn't make much sense for me to make use of RAII and to create and destruct the hardware interface itself inside main_loop. I suppose I could have pHardware create a temporary object representing a sort of "session" or "use" of the hardware which could be automatically cleaned up at exit of main_loop, but I'm not sure whether that idea would make it clearer to somebody else what my intent is. There will only ever be three ways out of the loop: the first is if bad data is read from the external hardware; the second is if ProcessPendingEvents() indicates a user-initiated abort, which simply causes m_bIsRunning to become false; and the last is if the end-condition is satisfied at the bottom of the loop. I should maybe also note that main_loop could be started and finished multiple times over the life of the ControlLoop object, so it should exit cleanly with m_bIsRunning = false afterwards.
Also, I realize that I could use the break keyword here, but most of these pseudocode function calls inside main_loop are not really encapsulated as functions, simply because they would need to either have many arguments or they would all need access to member variables. Both of these cases would be more confusing, in my opinion, than simply leaving main_loop as a longer function, and because of the length of the big while loop, a statement like goto loop_end seems to read clearer to me.
Now for the question: Would this solution make you uncomfortable if you were to write it in your own code? It does feel a little wrong to me, but then I've never made use of the goto statement before in C++ code -- hence my request for help from experts. Are there any other basic ideas which I am missing that would make this code clearer?
Thanks.
Avoiding the use of goto is a pretty solid thing to do in object oriented development in general.
In your case, why not just use break to exit the loop?
while (true)
{
if (condition_is_met)
{
// cleanup
break;
}
}
As for your question: your use of goto would make me uncomfortable. The only reason that break is less readable is your admission to not being a strong C++ developer. To any seasoned developer of a C-like language, break will both read better and provide a cleaner solution than goto.
In particular, I simply do not agree that
if (something)
{
goto loop_end;
}
is more readable than
if (something)
{
break;
}
which literally says the same thing with built-in syntax.
With your one, singular condition which causes the loop to break early, I would simply use a break. No need for a goto; that's what break is for.
However, if any of those function calls can throw an exception, or if you end up needing multiple breaks, I would prefer an RAII style container; this is the exact sort of thing destructors are for. You always perform the call to DeInitializeHardware, so...
// todo: add error checking if needed
class HardwareWrapper {
public:
HardwareWrapper(Hardware *pH)
: _pHardware(pH) {
InitializeAndCheckHardware(_pHardware);
}
~HardwareWrapper() {
DeInitializeHardware(_pHardware);
}
const Hardware *getHardware() const {
return _pHardware;
}
const Hardware *operator->() const {
return _pHardware;
}
const Hardware& operator*() const {
return *_pHardware;
}
private:
Hardware *_pHardware;
// if you don't want to allow copies...
HardwareWrapper(const HardwareWrapper &other);
HardwareWrapper& operator=(const HardwareWrapper &other);
};
// ...
void ControlLoop::main_loop()
{
HardwareWrapper hw(pHardware);
// code
}
Now, no matter what happens, you will always call DeInitializeHardware when that function returns.
UPDATE
If your main concern is that the while loop is too long, then you should aim to make it shorter. C++ is an OO language, and OO is about splitting things into small pieces and components; even in non-OO languages we generally still think a method or loop should be broken into smaller ones so it stays short and easy to read. If a loop has 300 lines in it, neither break nor goto really saves you there, does it?
UPDATE
I'm not against goto, but I wouldn't use it here the way you do; I'd prefer to just use break. Generally, when a developer sees a break there, he knows it means "go to the end of the while", and together with m_bIsRunning = false he can tell within seconds that it actually exits the loop. Yes, a goto may save those few seconds of understanding, but it may also make people nervous about your code.
The thing I can imagine that I'm using a goto would be to exit a two level loop:
while(running)
{
...
while(runnning2)
{
if(bad_data)
{
goto loop_end;
}
}
...
}
loop_end:
Instead of using goto, you should use break; to escape loops.
There are several alternative to goto: break, continue and return depending on the situation.
However, you need to keep in mind that both break and continue are limited in that they only affect the innermost loop. return, on the other hand, is not affected by this limitation.
In general, if you use a goto to exit a particular scope, then you can refactor using another function and a return statement instead. It is likely that it will make the code easier to read as a bonus:
// Original
void foo() {
DoSetup();
while (...) {
for (;;) {
if () {
goto X;
}
}
}
X: DoTearDown();
}
// Refactored
void foo_in() {
while (...) {
for (;;) {
if () {
return;
}
}
}
}
void foo() {
DoSetup();
foo_in();
DoTearDown();
}
Note: if your function body cannot fit comfortably on your screen, you are doing it wrong.
Goto is not good practice for exiting from a loop when break is an option.
Also, in complex routines, it is good to have only one piece of exit logic (with cleanup) placed at the end. Goto is sometimes used to jump to that return logic.
Example from QEMU vmdk block driver:
static int vmdk_open(BlockDriverState *bs, int flags)
{
int ret;
BDRVVmdkState *s = bs->opaque;
if (vmdk_open_sparse(bs, bs->file, flags) == 0) {
s->desc_offset = 0x200;
} else {
ret = vmdk_open_desc_file(bs, flags, 0);
if (ret) {
goto fail;
}
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
goto fail;
}
s->parent_cid = vmdk_read_cid(bs, 1);
qemu_co_mutex_init(&s->lock);
/* Disable migration when VMDK images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vmdk", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
I'm seeing loads of people suggesting break instead of goto. But break is no "better" (or "worse") than goto.
The inquisition against goto effectively started with Dijkstra's "Go To Considered Harmful" paper back in 1968, when spaghetti code was the rule and things like block-structured if and while statements were still considered cutting-edge. ALGOL 60 had them, but it was essentially a research language used by academics (cf. ML today); Fortran, one of the dominant languages at the time, would not get them for another 9 years!
The main points in Dijkstra's paper are:
Humans are good at spatial reasoning, and block-structured programs capitalise on that because program actions that occur near each other in time are described near each other in "space" (program code);
If you avoid goto in all its various forms, then it's possible to know things about the possible states of variables at each lexical position in the program. In particular, at the end of a while loop, you know that that loop's condition must be false. This is useful for debugging. (Dijkstra doesn't quite say this, but you can infer it.)
break, just like goto (and early returns, and exceptions...), reduces (1) and eliminates (2). Of course, using break often lets you avoid writing convoluted logic for the while condition, getting you a net gain in understandability -- and exactly the same applies for goto.

Portable thread-safe lazy singleton

Greetings to all.
I'm trying to write a thread safe lazy singleton for future use. Here's the best I could come up with. Can anyone spot any problems with it? The key assumption is that static initialization occurs in a single thread before dynamic initialisations. (this will be used for a commercial project and company is not using boost :(, life would be a breeze otherwise :)
PS: Haven't checked that this compiles yet, my apologies.
/*
There are two difficulties when implementing the singleton pattern:
Problem (a): The "global variable instantiation fiasco". TODO: URL
This is due to the unspecified order in which global variables are initialised. Static class members are equivalent
to a global variable in C++ during initialisation.
Problem (b): Multi-threading.
Care must be taken to ensure that the mutex initialisation is handled properly with respect to problem (a).
*/
/*
Things achieved, maybe:
*) Portable
*) Lazy creation.
*) Safe from unspecified order of global variable initialisation.
*) Thread-safe.
*) Mutex is properly initialised when invoked during global variable initialisation.
*) Effectively lock free in instance().
*/
/************************************************************************************
Platform dependent mutex implementation
*/
class Mutex
{
public:
void lock();
void unlock();
};
/************************************************************************************
Threadsafe singleton
*/
class Singleton
{
public: // Interface
static Singleton* Instance();
private: // Static helper functions
static Mutex* getMutex();
private: // Static members
static Singleton* _pInstance;
static Mutex* _pMutex;
private: // Instance members
bool* _pInstanceCreated; // This is here to convince myself that the compiler is not re-ordering instructions.
private: // Singletons can't be copied
explicit Singleton();
~Singleton() { }
};
/************************************************************************************
We can't use a static class member variable to initialise the mutex due to the unspecified
order of initialisation of global variables.
Calling this from
*/
Mutex* Singleton::getMutex()
{
static Mutex* pMutex = 0; // alternatively: static Mutex* pMutex = new Mutex();
if( !pMutex )
{
pMutex = new Mutex(); // Constructor initialises the mutex: eg. pthread_mutex_init( ... )
}
return pMutex;
}
/************************************************************************************
This static member variable ensures that we call Singleton::getMutex() at least once before
the main entry point of the program so that the mutex is always initialised before any threads
are created.
*/
Mutex* Singleton::_pMutex = Singleton::getMutex();
/************************************************************************************
Keep track of the singleton object for possible deletion.
*/
Singleton* Singleton::_pInstance = Singleton::Instance();
/************************************************************************************
Read the comments in Singleton::Instance().
*/
Singleton::Singleton( bool* pInstanceCreated )
{
fprintf( stderr, "Constructor\n" );
_pInstanceCreated = pInstanceCreated;
}
/************************************************************************************
Read the comments in Singleton::Instance().
*/
void Singleton::setInstanceCreated()
{
_pInstanceCreated = true;
}
/************************************************************************************
Fingers crossed.
*/
Singleton* Singleton::Instance()
{
/*
'instance' is initialised to zero the first time control flows over it. This
avoids the problem of the unspecified order of global variable initialisation.
*/
static Singleton* instance = 0;
/*
When we do:
instance = new Singleton( instanceCreated );
the compiler can reorder instructions and any way it wants as long
as the observed behaviour is consistent to that of a single threaded environment ( assuming
that no thread-safe compiler flags are specified). The following is thus not threadsafe:
if( !instance )
{
lock();
if( !instance )
{
instance = new Singleton( instanceCreated );
}
unlock();
}
Instead we use:
static bool instanceCreated = false;
as the initialisation indicator.
*/
static bool instanceCreated = false;
/*
Double-check pattern with a slight twist.
*/
if( !instanceCreated )
{
getMutex()->lock();
if( !instanceCreated )
{
/*
The ctor keeps a persistent reference to 'instanceCreated'.
In order to convince ourselves of the correct order of initialisation (I think
this is quite unnecessary).
*/
instance = new Singleton( instanceCreated );
/*
Set the reference to 'instanceCreated' to true.
Note that since setInstanceCreated() actually uses the non-static
member variable: '_pInstanceCreated', I can't see the compiler taking the
liberty to call Singleton's ctor AFTER the following call. (I don't know
much about compiler optimisation, but I doubt that it will break up the ctor into
two functions and call one part of it before the following call and the other part after.)
*/
instance->setInstanceCreated();
/*
The double check pattern should now work.
*/
}
getMutex()->unlock();
}
return instance;
}
No, this will not work. It is broken.
The problem has little/nothing to do with the compiler. It has to do with the order in which a second CPU will 'see' what the first CPU has done to memory. The memory (and caches) will be consistent, but the timing of WHEN each CPU decides to write or read each part of memory/cache is indeterminate.
So for CPU1:
instance = new Singleton( instanceCreated );
instance->setInstanceCreated();
Let's consider the compiler first. There is NO reason why the compiler can't reorder or otherwise alter these functions. Maybe like:
temp_register = new Singleton(instanceCreated);
temp_register->setInstanceCreated();
instance = temp_register;
or many other possibilities - like you said as long as single-threaded observed behaviour is consistent. This DOES include things like " break up the ctor into two functions and call one part of it before the following call and the other part after."
Now, it probably wouldn't break it up into 2 calls, but it would INLINE the ctor, particularly since it is so small. Then, once inlined, everything may be reordered, as if the ctor was broken in 2, for example.
In general, I would say not only is it possible that the compiler reordered things, it is probable - ie for the code you have, there is probably a reordering (once inlined, and inlining is likely) that is 'better' than the order given by the C++ code.
But let's leave that aside, and try to understand the real issues of double-checked locking.
So, let's just assume the compiler didn't reorder anything. What about the CPU? Or more importantly CPUs - plural.
The first CPU, 'CPU1' needs to follow the instructions given by the compiler, in particular, it needs to write to memory the things it has been told to write:
instance,
instanceCreated
other member variables of the Singleton (ie your Singleton DOES do something, and has some state, doesn't it?)
Actually, that 'other member variable' stuff is really important. Important for your singleton (that's its real purpose, right?), and important for our discussion. So let's give it a name: important_data. ie instance->important_data. And maybe instance->important_function(), which uses important_data. Etc.
As mentioned, let's assume the compiler has written the code such that these items are written in the order you are expecting, namely:
important_data - written inside the ctor, called from
instance = new Singleton(instanceCreated);
instance - assigned right after new/ctor returns
instanceCreated - inside setInstanceCreated()
Now, the CPU hands these writes off to the memory bus. Know what the memory bus does? IT REORDERS THEM. The CPU and architecture has the same constraints as the compiler - ie make sure this one CPU sees things consistently - ie single threaded consistent. So if, for example, instance and instanceCreated are on the same cache-line (highly likely, actually), they might be written together, and since they were just read, that cache-line is 'hot', so maybe they get written FIRST before important_data, so that that cache-line can be retired to make room for the cache-line where important_data lives.
Did you see that? instanceCreated and instance were just committed to memory BEFORE important_data. Note that CPU1 doesn't care, because it is living in a single-threaded world...
So now introduce CPU2:
CPU2 comes in, sees instanceCreated == true and instance != NULL and thus goes off and decides to call Singleton::Instance()->important_function(), which uses important_data, which is uninitialized. CRASH BANG BOOM.
By the way, it gets worse. So far, we've seen that the compiler could reorder, but we're pretending it didn't. Let's go one step further and pretend that CPU1 did NOT reorder any of the memory writing. Are we OK now?
No. Of course not.
Just as CPU1 decided to optimize/reorder its memory writes, CPU2 can REORDER ITS READS!
CPU2 comes in and sees
if (!instanceCreated) ...
so it needs to read instanceCreated. Ever heard of 'speculative execution'? (Great name for a FPS game, by the way). If the memory bus isn't busy doing anything, CPU2 might pre-read some other values 'hoping' that instanceCreated is true. ie it may pre-read important_data for example. Maybe important_data (or the uninitialized, possibly re-claimed-by-the-allocator memory that will become important_data) is already in CPU2's cache. Or maybe (more likely?) CPU2 just free'd that memory, and the allocator wrote NULL in its first 4 bytes (allocators often use that memory for their free-lists), so actually, the memory soon-to-become important_data may actually still be in the write queue of CPU2. In that case, why would CPU2 bother re-reading that memory, when it hasn't even finished writing it yet!? (it wouldn't - it would just get the values from its write-queue.)
Did that make sense? If not, imagine that the value of instance (which is a pointer) is 0x17e823d0. What was that memory doing before it became (becomes) the Singleton? Is that memory still in the write-queue of CPU2?...
Or basically, don't even think about why it might want to do so, but realize that CPU2 might read important_data first, then instanceCreated second. So even though CPU1 may have wrote them in order CPU2 sees 'crap' in important_data, then sees true in instanceCreated (and who knows what in instance!). Again, CRASH BANG BOOM. Or BOOM CRASH BANG, since by now you realize that the order isn't guaranteed...
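For reference, if C++11 atomics are available (not assumed in the question), the usual fix for double-checked locking is to publish the pointer with release/acquire ordering; a sketch with illustrative names (Widget, g_instance, g_mutex):
#include <atomic>
#include <mutex>

class Widget { /* the lazily created object, with its important_data */ };

std::atomic<Widget*> g_instance{nullptr};
std::mutex g_mutex;

Widget* instance()
{
    Widget* p = g_instance.load(std::memory_order_acquire);
    if (!p)
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        p = g_instance.load(std::memory_order_relaxed);
        if (!p)
        {
            p = new Widget();
            // the release store guarantees the Widget's members are visible
            // to other threads before the non-null pointer itself is
            g_instance.store(p, std::memory_order_release);
        }
    }
    return p;
}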
It's usually better to have a non-lazy singleton which does nothing in its constructor, and then in GetInstance do a thread-safe call once to a function which allocates any expensive resources. You're already creating a Mutex non-lazily, so why not just put the mutex and some kind of Pimpl in your Singleton object?
By the way, this is easier on Posix:
struct Singleton {
static Singleton *GetInstance() {
pthread_once(&control, doInit);
return instance;
}
private:
static void doInit() {
// slight problem: we can't throw from here, or fail
try {
instance = new Singleton();
} catch (...) {
// we could stash an error indicator in a static member,
// and check it in GetInstance.
std::abort();
}
}
static pthread_once_t control;
static Singleton *instance;
};
pthread_once_t Singleton::control = PTHREAD_ONCE_INIT;
Singleton *Singleton::instance = 0;
There do exist pthread_once implementations for Windows and other platforms.
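In C++11 and later (not assumed by the original question, so treat this as an aside), std::call_once gives the same once-only guarantee portably; a sketch of the equivalent:
#include <mutex>

struct Singleton {
    static Singleton *GetInstance() {
        std::call_once(control, doInit);
        return instance;
    }
private:
    static void doInit() { instance = new Singleton(); }
    static std::once_flag control;
    static Singleton *instance;
};

std::once_flag Singleton::control;
Singleton *Singleton::instance = 0;
Since C++11, a function-local static (the Meyers singleton) is also guaranteed to be initialised in a thread-safe way, which is often the simplest option of all.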
If you wish to see an in-depth discussion of Singletons, the various policies about their lifetime and the thread safety issues, I can only recommend a good read: "Modern C++ Design" by Alexandrescu.
The implementation is presented on the web in Loki, find it here!
And yes, it does hold in a single header file. So I would really encourage you to at least grab the file and read it, and better yet read the book to have the full-blown reflection.
At global scope in your code:
/************************************************************************************
Keep track of the singleton object for possible deletion.
*/
Singleton* Singleton::_pInstance = Singleton::Instance();
This makes your implementation not lazy. Presumably you want to set _pInstance to NULL at global scope, and assign to it after you construct the singleton inside Instance() before you unlock the mutex.
More food for thought from Meyers & Alexandrescu, with Singleton being the specific target: C++ and the Perils of Double-Checked Locking. It's a bit of a prickly problem.