Guice: Why must @Singleton-annotated classes be immutable/thread safe?

NOTE: This question is not about Singleton classes as described in Gamma94 (ensuring only one object ever gets instantiated.)
I read the Guice documentation about the @Singleton annotation:
Classes annotated @Singleton and @SessionScoped must be threadsafe.
Is this the case even if I don't intend to access the object from more than one thread? If so, why?

If an object is only ever accessed from a single thread, it doesn't need to be threadsafe even if it's a Guice @Singleton. Guice doesn't do any multithreading internally that could cause a non-threadsafe singleton to break: the process of building the Injector is all done on the thread that calls Guice.createInjector, and any dynamic provisioning is done on the thread that calls provider.get(). Of course, a singleton is only going to be created once and then returned each time it's needed; when it's created depends on whether it's bound as an eager singleton (always created at startup) and whether the Injector is created in Stage.DEVELOPMENT (created only if and when needed) or Stage.PRODUCTION (created at startup).
It's very often the case that singletons can be accessed from multiple threads at the same time though (particularly in web applications), hence the warning. While many developers will understand that a singleton needs to be threadsafe in that case, others may not and I imagine it was considered worth it to warn them.

Related

Extending lifetime of Boost log core

I want to extend the lifetime of the boost::log::core for the termination process.
I know that this is not recommended in the documentation, but I tried to use a Schwarz counter for the core to keep it alive until the last destructor of a global variable that uses it has run.
The core uses a shared pointer with a Meyers singleton. Is it possible to implement a Schwarz counter or nifty-counter initialization for the core?
The core::get() method returns a shared_ptr to the core, which is what implements the Schwarz counter. You can call this method during the normal operation of the program (i.e. while main() is executing) and save that pointer in your data structures that are guaranteed to be present during termination. While the shared_ptr exists, you can access the core through that pointer (but not necessarily through core::get()).
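For illustration, here is a minimal sketch of that approach, assuming Boost.Log v2; LoggingCoreKeeper and g_logging_core_keeper are illustrative names:
#include <boost/log/core.hpp>
#include <boost/shared_ptr.hpp>

// Guard object that holds a reference to the logging core for the whole
// lifetime of this global, including program termination.
struct LoggingCoreKeeper
{
    boost::shared_ptr< boost::log::core > core;

    LoggingCoreKeeper() : core(boost::log::core::get()) {}

    ~LoggingCoreKeeper()
    {
        // The core is still alive here because we hold a reference to it;
        // access it through the stored pointer, not through core::get().
        core->flush();
    }
};

// A namespace-scope instance; if it is constructed before the other globals
// that log during destruction, it will outlive them.
LoggingCoreKeeper g_logging_core_keeper;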
Note that all loggers provided by the library also save the pointer to the core internally. For example, if you're performing termination in an object destructor, make a logger a member of that object and you will be able to emit simple log records during the destruction.
Note that the core is not the only singleton the library maintains, and other singletons are not protected from destruction by this method. For example, you cannot use global loggers during program termination.

Is it safe to do C++ Singleton first instance creation this way?

To my understanding, if Singleton::instance() is called from different threads, there might be a problem if both calls hit the first construction of the actual instance.
So if I move the first Singleton::instance() call to the very beginning of the program where no other threads are even created, will this be thread safe now?
Of course, all its member variables are protected by a mutex guard when used.
This might open your eyes to a thread-safe Singleton and how easy it isn't.
http://silviuardelean.ro/2012/06/05/few-singleton-approaches/
As mentioned before, it's not very robust if you're relying on it being created before any threads are kicked off.
Worth noting though: if you compile with C++11, then doing as Brian said (static storage within a static method) guarantees thread-safe initialization. With any previous version you will need a mutex, and you will run into the caveats mentioned in the link I shared.
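For reference, a minimal sketch of that "static storage within a static method" approach under C++11, where initialization of the function-local static is guaranteed to happen exactly once even under concurrent calls:
class Singleton
{
public:
    static Singleton& instance()
    {
        // C++11 "magic statics": this local static is initialized exactly once,
        // even if several threads call instance() concurrently.
        static Singleton s;
        return s;
    }

    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;

private:
    Singleton() = default;
};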
So if I move the first Singleton::instance() call to the very beginning of the program where no other threads are even created, will this be thread safe now?
Yes, but this guarantee lives outside the Singleton's design; the design would likely be more robust if the thread safety were handled inside the Singleton itself.
You can often allocate it at file scope, or in function scope with static storage within a static method. Verify that your compiler generates mutual exclusion around the initialization, or add your own mutex there.
Yes, performing the initial instantiation when you can guarantee there is just one thread extant clearly protects it from other threads causing race conditions.
It doesn't feel terribly robust, though. At the very least, plaster the area with warning comments.
Maybe then you don't need lazy initialization of the singleton instance?
If you do want it, you can protect the singleton instance with a mutex when you construct it (see the sketch after this answer).
Just remember not to put the definition in a header.
If you put the implementation in a header, it may be generated in every compilation unit that uses it, which means it won't be single.
Also, don't compile it into a static library. That can likewise lead to multiple instances if the static library is linked into several shared libraries.
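For illustration, a sketch of the mutex-protected lazy construction mentioned in this answer; Manager is a placeholder name, and the lock is taken on every call for simplicity:
#include <mutex>

class Manager
{
public:
    static Manager& instance()
    {
        // Serializes the first construction (and every later access, which is
        // simple but adds a little overhead).
        std::lock_guard<std::mutex> lock(mutex_);
        if (!instance_)
            instance_ = new Manager();
        return *instance_;
    }

private:
    Manager() = default;

    static std::mutex mutex_;
    static Manager*   instance_;
};

// Definitions live in exactly one .cpp file, per the advice above about
// headers and static libraries.
std::mutex Manager::mutex_;
Manager*   Manager::instance_ = nullptr;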
If no additional threads have been created yet and you make that move before those threads are created, I don't see a realistic scenario where you might have problems using the already-created singleton in any newly created multi-threaded environment.
The main thread-safety problem of the singleton pattern in a multi-threaded environment is how to prevent two or more "singleton" instances from being created by different threads. I have described this scenario in the "Multi-threaded environment" section here.

Will COM marshalling (ever) be necessary for an object with ThreadingModel Both?

This is triggered by another question.
Specifically, I have an in-process COM class that is defined in the CLSID registry as having a ThreadingModel of Both.
Our process activates this object through CoCreateInstance (not CoCreateInstanceEx, if that even matters for an in-proc DLL server).
Given a threading model of Both, and given the rules listed in the docs:
Threading model of server | Apartment server is run in
------------------------------------------------------
Both | Same apartment as client
and given what Hans writes in the other answer:
... Marshaling occurs when the client call needs to be made on a different thread. ... can happen when the ThreadingModel specified in the comClass element demands it. In other words, when the COM object was created on one thread but is called on another and the server is not thread-safe.
my tentative conclusion would be that such an object will never need implicit marshalling of calls to its interfaces, since the object will always live in the same apartment as its client.
Is that correct, even if the client process is running as STA?
Yes, there may be marshaling.
If the client of your COM class is running in an STA and you attempt to invoke your class from another apartment, it will have to marshal to the apartment that it was created in.
The COM terminology can be really confusing. When you refer to a 'client' in this case, you're really referring to a thread, not the entire application (as it would imply).
Both just means that the threading model of the server conforms to the client that instantiates it. That is, when you instantiate your class, it takes on the threading model of the thread it was created on. Since you're instantiating the server in an STA, your server will use STA, meaning it can only be invoked on the thread that created it; if another thread tries to invoke it, it will marshal to the thread it was created on.
I can't help myself posting this, although it is not a direct answer to the question.
There's a brilliant MSKB article from the golden ages of COM: INFO: Descriptions and Workings of OLE Threading Models. Still there, and has all the relevant info. The point is, you should not worry about whether there is marshaling or not, if you follow the rules. Just register your object as ThreadingModel=Both, aggregate the Free-Threaded Marshaler with CoCreateFreeThreadedMarshaler, and be done. COM will do the marshaling if needed, in the best possible way. Depending on the client's apartment model, the client code may receive the direct pointer to your interface, if it follows the rules too.
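For illustration, this is roughly the usual ATL pattern for aggregating the free-threaded marshaler (what the ATL wizard generates when the FTM option is selected); CMyServer, IMyServer and CLSID_MyServer are placeholder names and the registry/class-factory boilerplate is omitted:
#include <atlbase.h>
#include <atlcom.h>

class ATL_NO_VTABLE CMyServer :
    public CComObjectRootEx<CComMultiThreadModel>,
    public CComCoClass<CMyServer, &CLSID_MyServer>,
    public IMyServer
{
public:
    DECLARE_GET_CONTROLLING_UNKNOWN()

    BEGIN_COM_MAP(CMyServer)
        COM_INTERFACE_ENTRY(IMyServer)
        // Hand out the FTM's IMarshal so in-process cross-apartment callers
        // get a direct pointer instead of a proxy.
        COM_INTERFACE_ENTRY_AGGREGATE(IID_IMarshal, m_pUnkMarshaler.p)
    END_COM_MAP()

    HRESULT FinalConstruct()
    {
        return CoCreateFreeThreadedMarshaler(GetControllingUnknown(), &m_pUnkMarshaler.p);
    }

    void FinalRelease()
    {
        m_pUnkMarshaler.Release();
    }

    CComPtr<IUnknown> m_pUnkMarshaler;
};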
Any "alien" interface that you may receive when a method of your interface gets called, will be valid in the scope of the call, because you stay on the same thread. If you don't need to store it, that's all that matters.
If however you do need to cache the "alien" interface, the right way of doing this would be to store it using CoMarshalInterThreadInterfaceInStream/CoGetInterfaceAndReleaseStream (a sketch follows after the steps below):
To store it:
Enter the critical section;
call CoMarshalInterThreadInterfaceInStream and store the IStream pointer in a member field;
leave the critical section.
To retrieve it:
Enter the critical section;
call CoGetInterfaceAndReleaseStream to retrieve the interface;
call CoMarshalInterThreadInterfaceInStream again and store the result as an IStream for any future use;
leave the critical section;
use the interface in the scope of the current call.
To release it:
When you no longer need to keep it, just release the stored IStream (inside the critical section).
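A minimal sketch of those steps, assuming the cached pointer is kept as a plain IUnknown (in practice you would use the real interface and its IID); error handling is abbreviated and the names are illustrative:
#include <objbase.h>
#include <mutex>

class AlienCache
{
public:
    // Store: marshal the interface into a stream on the thread that received it.
    HRESULT Store(IUnknown* alien)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        Drop();
        return CoMarshalInterThreadInterfaceInStream(IID_IUnknown, alien, &stream_);
    }

    // Retrieve: unmarshal for the current thread, then re-marshal so the cached
    // stream stays usable for future calls. Use the result only within this call.
    HRESULT Retrieve(IUnknown** alien)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        HRESULT hr = CoGetInterfaceAndReleaseStream(stream_, IID_IUnknown, (void**)alien);
        stream_ = nullptr;                       // the stream was consumed above
        if (SUCCEEDED(hr))
            hr = CoMarshalInterThreadInterfaceInStream(IID_IUnknown, *alien, &stream_);
        return hr;
    }

    // Release: drop the cached stream when the interface is no longer needed.
    void Clear()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        Drop();
    }

private:
    void Drop()
    {
        if (stream_) { stream_->Release(); stream_ = nullptr; }
    }

    std::mutex mutex_;
    IStream*   stream_ = nullptr;
};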
If the "alien" object is free-threaded too, and the things are happening inside the same process, you will likely be dealing with a direct interface pointer after CoGetInterfaceAndReleaseStream. However, you should not make any assumptions, and you really don't need to know if the object your dealing with is the original object or a COM marshaller proxy.
This can be slightly optimized by using CoMarshalInterface w/ MSHLFLAGS_TABLESTRONG / CoUnmarshalInterface / IStream::Seek(0, 0) / CoReleaseMarshalData instead of CoGetInterfaceAndReleaseStream/CoGetInterfaceAndReleaseStream, to unmarshal the same interface as many times as needed without releasing the stream.
More complex (and possibly more efficient) caching scenarios are possible, involving Thread Local Storage. However, I believe that would be an overkill. I did not do any timing, but I think the overhead of CoMarshalInterThreadInterfaceInStream/CoGetInterfaceAndReleaseStreamis really low.
That said, if you need to maintain a state that stores any resources or objects which may require thread affinity, other than aforementioned COM interfaces, you should not mark your object as ThreadingModel=Both or aggregate the FTM.
Yes, marshalling is still possible. A couple of examples:
the object is instantiated from an MTA thread and so placed into an MTA apartment and then its pointer is passed into any STA thread and that STA thread calls methods of the object. In this case an STA thread can only access the object via marshalling.
the object is instantiated from an STA thread and so placed into an STA apartment belonging to that thread and then its pointer is passed into another STA thread or an MTA thread. In both cases those threads can only access the object via marshalling.
In fact you can expect no marshalling only in the following two cases:
the object is instantiated from an MTA thread and then only accessed by MTA threads - both the one that instantiated the object and all other MTA threads of the same process.
the object is instantiated from an STA thread and then only accessed by that very thread
and in all other cases marshalling will kick in.
ThreadingModel = Both simply means that the COM server author can give a guarantee that his code is thread-safe but cannot give the same guarantee that other code he didn't write will be called in a thread-safe way. The most common case of getting such foreign code to execute is through callbacks, connection points being the most common example (usually called "events" in client runtimes).
So if the server instance was created in an STA then the client programmer is going to expect the events to run on that same thread. Even if a server method that fires such an event was called from another thread. That requires that call to be marshaled.

COM reference counting - thread safety

If I have a COM object, is it required for the AddRef() and Release() methods to be thread-safe, i.e., do I have to use atomic operations for my ref count?
Yes, if you are using the free-threaded apartment model, use InterlockedIncrement() and InterlockedDecrement() to handle the ref count.
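For illustration, a minimal sketch of a thread-safe reference count using those functions; CMyObject is a placeholder class showing only the ref-counting part:
#include <windows.h>

class CMyObject
{
public:
    ULONG AddRef()
    {
        // Atomic increment; safe when called from several threads at once.
        return InterlockedIncrement(&m_cRef);
    }

    ULONG Release()
    {
        // Atomic decrement; destroy the object when the last reference is gone.
        ULONG count = InterlockedDecrement(&m_cRef);
        if (count == 0)
            delete this;
        return count;
    }

private:
    LONG m_cRef = 1;   // COM objects conventionally start with one reference
};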
I think the answer is no. It is not required. If you want your COM object to be thread safe then those should be thread safe. Otherwise they do not have to be.
E.g., if you look here: The Rules of the Component Object Model, it is not mentioned as a requirement.
Also, in The COM Programmer's Cookbook (Building a COM Component) you can see a sample object without thread-safe reference counting.
Microsoft code snippet:
ULONG COutside::AddRef (void)
{
    return ++m_cRef;
}
In practice most implementations make the ref count thread-safe, because otherwise the COM objects would not be thread-safe. If you know the object will only be used in one thread, I believe it is an allowed optimization. Not all COM objects are thread-safe; I've worked with a few that weren't.
To deal with the fact that COM objects may or may not be thread-safe, COM offers different "apartments" in which COM objects are created. In a single-threaded apartment only a single thread can access objects within that apartment, whereas in a multi-threaded apartment objects can be shared between multiple threads. Quoting from Understanding and Using COM Threading Models:
"Although multi-threaded apartments,
sometimes called free-threaded
apartments, are a much simpler model,
they are more difficult to develop for
because the developer must implement
the thread synchronization for the
objects, a decidedly nontrivial task."
Yup, that is required. COM is a simple binary standard, and if you use free-threaded apartments you'll get truly free-threaded accesses.
This will depend on the threading model you use and the kind of object. Please see the description of _ATL_*_THREADED macros. Those macros affect thread-safety of AddRef()/Release() of "usual" classes and of factories.
If you use a "too loose" macro you violate thread-safety requirements and your program might malfunction. If you choose a "too tight" macro you might lose some performance, but you as usual don't know if you care before you profile.
Here's how you choose the right macro (and this explains whether AddRef()/Release() have to be thread-safe).
If all the classes of a single server have no threading model specified (Main STA), then there's no chance of concurrent access to either objects or factories, and they can all have non-threadsafe AddRef()/Release(); you get this by specifying the _ATL_SINGLE_THREADED macro.
Otherwise, if at least one class has the "Apartment" model specified, you need thread-safe AddRef()/Release() for the factory of that object but can still have non-threadsafe AddRef()/Release() in the object itself; you get this by specifying the _ATL_APARTMENT_THREADED macro. This macro makes all factories have thread-safe AddRef()/Release() and all objects non-threadsafe AddRef()/Release().
Finally, if at least one class has the "Both" or "Free" threading model specified, AddRef()/Release() must be thread-safe both in that class and in the factory; you get this by either specifying _ATL_FREE_THREADED or simply not specifying any of the above - this "tightest" macro's effect is on by default. So the default configuration for COM objects created using ATL is to have thread-safe AddRef()/Release() for all objects - both served objects and factories.
That said, you don't always need AddRef()/Release() to be thread-safe, but you usually should, unless you know for sure that you can get by without it and that doing so gains you performance.
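For reference, a sketch of where these macros are chosen in an ATL project; the file name is just the usual convention and exactly one macro should be defined before any ATL header is included:
// In the precompiled header (e.g. stdafx.h / pch.h), before the ATL includes.
#define _ATL_APARTMENT_THREADED   // e.g. all classes in this server are "Apartment"
// #define _ATL_SINGLE_THREADED   // only if every class uses the Main STA
// #define _ATL_FREE_THREADED     // "Both"/"Free" classes present (also the default)

#include <atlbase.h>
#include <atlcom.h>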

Static objects and singletons

I'm using boost singletons (from serialization).
For example, there are some classes which inherit boost::serialization::singleton. Each of them has a define like this near its definition (in the header file):
#define appManager ApplicationManager::get_const_instance()
class ApplicationManager: public boost::serialization::singleton<ApplicationManager> { ... };
And I have to call some method from that class each update (nearly 17 ms), for example, 200 times. So the code is like:
for (int i=0; i < 200; ++i)
appManager.get_some_var();
I looked at the function call stack with gprof and saw that boost::get_const_instance is called each time. Maybe the compiler will optimize this in release mode?
My idea is to make some global variable like:
ApplicationManager &handle = ApplicationManager::get_const_instance();
And use handle, so it wouldn't call get_const_instance each time. Is that right?
Instead of using the Singleton anti-pattern, just use a global variable and be done with it. It's more honest.
The main benefit of Singleton is when you want lazy initialization, or more fine grained control over initialization order than a global variable would allow you. It doesn't look like either of these things are a concern for you, so just use a global.
Personally, I think designs with global variables or Singletons are almost certainly broken. But to each their own.
If you are bent on using a Singleton, the performance concern you raise is interesting, but likely not an issue as the function call overhead is probably less than 100ns. As was pointed out, you should profile. If it really concerns you a whole lot, store a local reference to the Singleton before the loop:
ApplicationManager &myAppManager = appManager;
for (int i=0; i < 200; ++i)
myAppManager.get_some_var();
BTW, using that #define in that way is a serious mistake. Almost any case where you use the preprocessor for anything other than conditional compilation based on compile-time flags is probably a poor one. Boost does make extensive use of the preprocessor, but mostly to get around C++ limitations. Do not emulate it.
Lastly, that function is probably doing something important. One of the jobs of a get_instance method for Singletons is to avoid having multiple threads initialize the same Singleton at the same time. With global variables this shouldn't be an issue because they should be initialized before you've started any threads.
Is it really a problem? I mean, does your application really suffer from this behaviour?
I would despise such a solution because, in effect, you are countering one of the benefits of the Singleton pattern, namely avoiding global variables. If you want to use a global variable, then don't use a Singleton at all, right?
Yes, that is certainly a possible solution. I'm not entirely sure what boost is doing with its singleton behind the scenes; you can look that up yourself in the code.
The singleton pattern is just like creating a global object and accessing the global object, in most respects. There are some differences:
1) The singleton object instance is not created until it is first accessed, whereas the global object is created at program startup.
2) Because the singleton object is not created until it is first accessed, it is actually created when the program is running. Thus the singleton instance has access to other fully constructed objects in the program when the constructor is actually running.
3) Because you access the singleton through the getInstance() method (boost's get_const_instance method) there is a little bit of overhead for executing that method call.
So if you're not concerned about when the singleton is actually created, and can live with it being created at program startup, you could just go with a global variable and access that. If you really need the singleton created after the program starts up, then you need the singleton. In that case, you can grab and hold onto a reference to the object returned by get_const_instance() and use that reference.
Something that bit me in the past, though, that you should be aware of: you're actually getting a reference to an object that is owned by the singleton. You don't own that object.
1) Do not write code that would cause the destructor to execute (say, using a shared pointer on the returned reference), or write any other code that could cause the object to end up in a bad state.
2) In a multi-threaded app, take care to correctly lock fields in the object if the object may be used by more than one thread.
3) In a multi-threaded app, make sure that all threads that hold onto references to the object terminate before the program is unloaded. I've seen a case where the singleton's code resides in one DLL library; a thread that holds the reference lives in another DLL library. When the program ends, the thread was still active. The DLL holding the singleton's code was unloaded first; the thread that was still alive tried to do something to the singleton's object and caused a crash.
Singletons have their advantages in situations where you want to control the level of access to something at process or application scope in a more elegant way than a global variable could achieve.
However, most singleton objects provided in a library will be designed to ensure some level of thread safety, and most likely access to the instance is locked via a mutex or some other kind of critical section, which can affect performance.
In the case of a game or 3D application where performance is key, you may want to consider making your own lightweight singleton if thread safety is not a concern, and gain some performance.