Background: I have some classes implementing a subject/observer design pattern that I've made thread-safe. A subject notifies its observers with a simple method call, observer->Notified( this ), if the observer was constructed in the same thread as the notification is being made. But if the observer was constructed in a different thread, the notification is posted onto a queue to be processed later by the thread that constructed the observer, and the simple method call is then made when the notification event is processed.
So… I have a map associating threads and queues which gets updated when threads and queues are constructed and destroyed. This map itself uses a mutex to protect multi-threaded access to it.
The map is a singleton.
I've been guilty of using singletons in the past because "there will be only one in this application", and believe me - I have paid my penance!
One part of me can't help thinking that there really will be only one queue/thread map in an application. The other voice says that singletons are not good and you should avoid them.
I like the idea of removing the singleton and being able to stub it for my unit tests. Trouble is, I'm having a hard time trying to think of a good alternative solution.
The "usual" solution which has worked in the past is to pass in a pointer to the object to use instead of referencing the singleton. I think that would be tricky in this case, since observers and subjects are ten-a-penny in my application, and it would be very awkward to have to pass a queue/thread map object into the constructor of every single observer.
What I appreciate is that I may well have only one map in my application, but it shouldn't be in the bowels of the subject and observer class code where that decision is made.
Maybe this is a valid singleton, but I'd also appreciate any ideas on how I could remove it.
Thanks.
PS. I have read What's Alternative to Singleton and this article mentioned in the accepted answer. I can't help thinking that the ApplicationFactory is just yet another singleton by another name. I really don't see the advantage.
If the only reason for trying to get rid of the singleton is unit testing, maybe replace the singleton getter with something you can swap a stub into:
class QueueThreadMapBase
{
    // virtual interface functions (plus a virtual destructor)
};

class QueueThreadMap : public QueueThreadMapBase
{
    // your real implementation
};

class QueueThreadMapTestStub : public QueueThreadMapBase
{
    // your test implementation
};

static QueueThreadMapBase* pGlobalInstance = new QueueThreadMap;

QueueThreadMapBase* getInstance()
{
    return pGlobalInstance;
}

void setInstance(QueueThreadMapBase* pNew)
{
    pGlobalInstance = pNew;
}
Then in your test just swap out the queue/thread map implementation. At the very least this exposes the singleton a little more.
Some thoughts towards a solution:
Why do you need to enqueue notifications for observers that were created on a different thread? My preferred design would be to have the subject just notify the observers directly, and put the onus on the observers to handle thread safety themselves, with the knowledge that Notified() might be called at any time from another thread. The observers know which parts of their state need to be protected with locks, and they can handle that better than the subject or the queue.
Assuming that you really have a good reason for keeping the queue, why not make it an instance? Just do queue = new Queue() somewhere in main, and then pass around that reference. There may only ever be one, but you can still treat it as an instance rather than a global static.
What's wrong with putting the queue inside the subject class? What do you need the map for?
You already have a thread reading from the singleton queue map. Instead of doing that, simply make the map a member of the subject class and provide methods to subscribe an observer and to fetch pending notifications:
class Subject
{
public:
    void Subscribe(NotifyCallback callback, ThreadId threadId)
    {
        // If the observer was created on another thread, add to the map
        if (threadId != this->threadId)
            queue[threadId].Add(callback);
    }

    NotifyCallback GetNext()
    {
        return queue[CallerThread.Id].Pop();
    }

private:
    // Assume this is thread-safe and all
    QueueMap queue;
};
Now any thread can call the GetNext method to start dispatching... of course it is all overly simplified, but it's just the idea.
Note: I'm working with the assumption that you already have an architecture around this model so that you already have a bunch of Observers, one or more subjects and that the threads already go to the map to do the notifications. This gets rid of the singleton but I'd suggest you notify from the same thread and let the observers handle concurrency issues.
My approach was to have the observers provide a queue when they registered with the subject; the observer's owner would be responsible for both the thread and the associated queue, and the subject would associate the observer with the queue, with no need for a central registry.
Thread-safe observers could register without a queue, and be called directly by the subject.
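For illustration, a minimal sketch of what that registration could look like (the names NotificationQueue, Attach and Entry are mine, not from the original code; this is just the idea, not a definitive implementation):

#include <vector>

struct Event;

struct IObserver {
    virtual ~IObserver() = default;
    virtual void Notified(Event &event) = 0;
};

// Owned by the observer's owner, alongside the thread that will drain it.
struct NotificationQueue {
    virtual ~NotificationQueue() = default;
    virtual void Post(IObserver *observer, Event &event) = 0;
};

class Subject {
public:
    // Pass a queue if the observer must be notified on its own thread;
    // pass nullptr if the observer is thread-safe and can be called directly.
    void Attach(IObserver *observer, NotificationQueue *queue = nullptr) {
        observers_.push_back({observer, queue});
    }

    void NotifyAll(Event &event) {
        for (auto &entry : observers_) {
            if (entry.queue)
                entry.queue->Post(entry.observer, event); // deferred to the owning thread
            else
                entry.observer->Notified(event);          // direct, same-thread call
        }
    }

private:
    struct Entry { IObserver *observer; NotificationQueue *queue; };
    std::vector<Entry> observers_;
};

The point is that the subject never consults a central thread/queue map; the association travels with the registration.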
Your observers may be cheap, but they depend on the notification-queue-thread map, right?
What's awkward about making that dependency explicit and taking control of it?
As for the application factory Miško Hevery describes in his article, the biggest advantages are that 1) the factory approach doesn't hide dependencies and 2) the single instances you depend on aren't globally available such that any other object can meddle with their state. So using this approach, in any given top-level application context, you know exactly what's using your map. With a globally accessible singleton, any class you use might be doing unsavory things with or to the map.
What about adding a Reset method that returns a singleton to its initial state that you can call between tests? That might be simpler than a stub. It might be possible to add a generic Reset method to a Singleton template (deletes the internal singleton pimpl and resets the pointer). This could even include a registry of all singletons with a master ResetAll method to reset all of them!
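As a rough illustration (the template, names and details here are my own, not from the question's code), a Reset on a Singleton template could look something like this:

#include <memory>

template <typename T>
class Singleton {
public:
    static T &Instance() {
        if (!instance_)
            instance_.reset(new T);   // lazily construct on first use
        return *instance_;
    }

    // Destroy the held instance so the next Instance() call rebuilds it;
    // call this between unit tests to return to a pristine state.
    static void Reset() { instance_.reset(); }

private:
    static std::unique_ptr<T> instance_;
};

template <typename T>
std::unique_ptr<T> Singleton<T>::instance_;

A ResetAll registry would additionally require each Singleton<T> to register its Reset function in a shared list on first use; the sketch above omits that and is not thread-safe.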
I'm working with code which has a lot of observer pattern implementations. All of them are organized in this manner:
Some interface to be implemented by observers:
class ObserverInterface {
public:
    virtual ~ObserverInterface() = default;
    virtual void FooOccurs() = 0;
};
Some class, which implements Register, Unregister and notifications:
class ObservableImpl {
public:
    void Register(ObserverInterface *observer);
    void Unregister(ObserverInterface *observer);
private:
    std::vector<ObserverInterface *> observers;

    void SomeMethod() {
        // foo things
        for (auto *observer : observers) {
            observer->FooOccurs();
        }
    }
};
Every time there is a copy-paste of Register and Unregister, as well as an implementation of the notification loop for each method of ObserverInterface. And every time the programmer has to remember to call Unregister() before an observer is destructed.
I wish to enclose the observer pattern in two class templates. So far I've got something like that:
http://rextester.com/UZGG86035
But I'm not sure whether I'm reinventing the wheel. Is there an easier, commonly known approach to doing this?
In C++11, I'd advise a token-based approach.
You register an observer. An observer is just a std::function<void(Signature...)>.
The registration function returns a token, a std::shared_ptr<void>. So long as that returned shared_ptr is valid, the broadcaster will continue to broadcast to that listener.
The listener is now responsible for maintaining that std::shared_ptr lifetime.
Inside the broadcaster, you hold a weak_ptr and .lock() it before broadcasting. If I don't really need to unregister (usually I do not), I lazily clean up my list of weak_ptrs. Otherwise, the shared_ptr I return has a deleter that does the unregistration.
Alternatively, your listeners are shared_ptr<std::function<void(Args...)>>, and internally you store weak_ptr to same.
In this model, you cannot easily inject an unregistration function. However, it does mean listeners can use the aliasing constructor themselves to tightly bind the lifetime of the callback to their own, assuming they are managed by a shared_ptr.
In my experience, simply having listeners maintain a std::vector<token> is sufficient. If they have a more complex listening relationship they can do more work, maintaining keys and the like.
Hybrid models are also possible.
Both of these are acceptable for not-thread-safe broadcasting, and can be written in a few dozen lines of code.
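For illustration, here is roughly what the weak_ptr/token variant can look like (single-threaded; all names are mine and the details are a sketch under those assumptions, not a definitive implementation):

#include <algorithm>
#include <functional>
#include <memory>
#include <vector>

template <typename... Args>
class Broadcaster {
public:
    using Listener = std::function<void(Args...)>;
    using Token = std::shared_ptr<void>;   // listener keeps this alive to stay registered

    Token Register(Listener listener) {
        auto held = std::make_shared<Listener>(std::move(listener));
        listeners_.push_back(held);        // broadcaster stores only a weak reference
        return held;                       // the caller owns the strong reference
    }

    void Broadcast(Args... args) {
        // Call every listener that is still alive.
        for (auto &weak : listeners_) {
            if (auto strong = weak.lock())
                (*strong)(args...);
        }
        // Lazily prune entries whose token has been dropped.
        listeners_.erase(
            std::remove_if(listeners_.begin(), listeners_.end(),
                           [](const std::weak_ptr<Listener> &w) { return w.expired(); }),
            listeners_.end());
    }

private:
    std::vector<std::weak_ptr<Listener>> listeners_;
};

Dropping the returned token (or letting the listener's std::vector<Token> go out of scope) is what unregisters the listener; no explicit Unregister call is needed.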
Thread-safe broadcasting gets tricky. Often I find you are better off using a message-passing pattern for this rather than the alternatives, as this reduces the difficulty in reasoning about concurrency slightly.
This also doesn't deal with situations where you want to register listeners willy-nilly, and broadcaster and listener lifetime is like popcorn.
I'm in a situation where I think that two implementations are correct, and I don't know which one to choose.
I've an application simulating card readers. It has a GUI where you choose which serial port and speed to use, plus play and stop buttons.
I'm looking for the best implementation for reader construction.
I have a SimulatorCore class that lives as long as my application.
SimulatorCore instantiates the Reader class, and it will be possible to simulate multiple readers on multiple serial ports.
Two possibilities:
My Reader is a pointer (dynamic instantiation); I instantiate it when the play button is hit and delete it when the stop button is hit.
My Reader is an object (static instantiation); I instantiate it in the SimulatorCore constructor, add Reader::init() and Reader::cleanup() methods to my Reader class, and call these when play and stop are hit.
Personally I look at it from the functional side: I clearly want to use a pointer, so that no Reader is instantiated when no reader is being simulated.
Someone told me that I should use static instantiation (reason: safety, and because "it's bad to use pointers when you have the choice not to use them").
I'm not familiar with them, but I think I could also use smart pointers.
Code samples: 1st solution:
class SimulatorCore
{
public:
    void play() { reader = new Reader(); }
    void stop() { delete reader; reader = nullptr; }
private:
    Reader *reader = nullptr;
};
Code samples: 2nd solution:
class SimulatorCore
{
public:
    void play() { reader.init(); }
    void stop() { reader.cleanup(); }
private:
    Reader reader;
};
The code is untested; I've just written it for illustration.
What is the best solution? Why?
You can easily use shared_ptr/unique_ptr:
class SimulatorCore
{
public:
    void play() { _reader = make_shared<Reader>(); }
    void stop() { _reader = nullptr; }
private:
    shared_ptr<Reader> _reader;
};
That should solve your problem the right way, I guess.
Manual dynamic allocation causes some problems, for example with exceptions: memory can be leaked if an exception is thrown between play() and stop(), so that stop() is never called. Or you might simply forget to call stop() somewhere before SimulatorCore is destroyed, which is quite possible in a large program.
If you have never tried smart pointers, this is a good chance to start using them.
You should generally avoid performing dynamic allocation with new yourself, so if you were going to go with the 1st solution, you should use smart pointers instead.
However, the main question here is a question of logic. A real card reader exists in an idle state until it is being used. In the 2nd solution, what do init and cleanup do? Do they simply setup the card reader into an idle state or do they start simulating actually having a card being read? If it's the first case, I suggest that this behaviour should be in the constructor and destructor of Reader, and then creating a Reader object denotes bringing a card reader into existence. If it's the second case, then I'd say the 2nd solution is pretty much correct, just that the functions are badly named.
What seems most logical to me is something more like this:
class SimulatorCore
{
public:
    void play() { reader.start(); }
    void stop() { reader.stop(); }
private:
    Reader reader;
};
Yes, all I've done is change the function names for Reader. However, the functions now are not responsible for initialising or cleaning up the reader - that responsibility is in the hands of Reader's constructor and destructor. Instead, start and stop begin and end simulation of the Reader. A single Reader instance can then enter and exit this simulation mode multiple times in its lifetime.
If you later want to extend this idea to multiple Readers, you can just change the member to:
std::vector<Reader> readers;
However, I cannot know for certain that this is what you want because I don't know the logic of your program. Hopefully this will give you some ideas though.
Again, whatever you decide to do, you should avoid using new to allocate your Readers and then also avoid using raw pointers to refer to those Readers. Use smart pointers and their corresponding make_... functions to dynamically allocate those objects.
It clearly depends on how your whole program is organized, but in general I think I would prefer the static approach, because of responsibility considerations:
Suppose you have a separate class that handles serial communication. That class will send and receive messages and dispatch them to the reader class. A message may arrive at any time. The difference between the dynamic and static approaches is:
With the dynamic approach, the serial class must test if the reader actually exists before dispatching a message. Or the reader has to register and unregister itself in the serial class.
With the static approach, the reader class can decide for itself, if it is able to process the message at the moment, or not.
So I think the static approach is a bit easier and straight-forward.
However, if there is a chance that you will have to implement other, different reader classes in the future, the dynamic approach will make this extension easier, because the appropriate class can easily be instantiated at runtime.
So the dynamic approach offers more flexibility.
I have a singleton class for logging purposes in my Qt project. In each class except the singleton one, there is a pointer to the singleton object and a signal connected to a writing slot in the singleton object. Whichever class wants to write log info just emits that signal. The signals are queued, so it's thread-safe.
Please critique this approach from OOP point of view, thanks.
=============================================================================================
Edit 1:
Thank you all for your replies; listening to opposing opinions is always a great way to learn.
Let me explain more about my approach and what I did in my code so far:
Exactly as MikeMB pointed out, the singleton class has a static function like get_instance() that returns a reference to that singleton. I store it in a local pointer in each class's constructor, so it is destroyed after the constructor returns. That is convenient for checking whether I got a null pointer and makes the code more readable. I don't like something like this:
if(mySingletonClass::getInstance() == NULL) { ... }
connect(getInstance(), SIGNAL(write(QString)), this, SLOT(write(QString)));
because it is more expensive than this:
QPointer<mySingletonClass> singletonInstance = mySingletonClass::getInstance();
if(singletonInstance.isNull()) { ... }
connect(singletonInstance, SIGNAL(write(QString)), this, SLOT(write(QString)));
Calling a function twice is more expensive than creating a local variable from ASM's point of view because of push, pop and return address calculation.
Here is my singleton class:
class CSuperLog : public QObject
{
    Q_OBJECT
public:
    // This static function creates the instance on the first call
    // and returns the instance just created.
    // On later calls it simply returns that same instance.
    static QPointer<CSuperLog> getInstance(void);
    ~CSuperLog();
public slots:
    void writingLog(QString aline);
private:
    static bool ready;
    static bool instanceFlag;
    static bool initSuccess;
    static QPointer<CSuperLog> ptrInstance;
    QTextStream * stream;
    QFile * oFile;
    QString logFile;
    explicit CSuperLog(QObject *parent = 0);
};
I call getInstance() at the beginning of main() to make sure it is ready immediately for every other class whenever they need to log important information.
MikeMB:
Your approach puts a middle man in between; it makes the path of the logging info much longer, because signals in Qt are always queued unless you make a direct connection. The reason I can't make a direct connection here is that it would make the class non-thread-safe, since I use threads in the other classes. Yes, someone will say you can use a mutex, but a mutex also creates a queue when more than one thread competes for the same resource. Why not use the existing mechanism in Qt instead of making my own?
Thank you all of your posts!
=========================================================
Edit 2:
To Marcel Blanck:
I like your approach as well because you considered resource competition.
In almost every class I need signals and slots, so I need QObject, and this is why I chose Qt.
There should be only one instance of a static object, if I haven't got that wrong.
Using semaphores is the same as using signals/slots in Qt; both generate a message queue.
There are always pros and cons regarding software design patterns and application performance. Adding more layers in between makes your code more flexible, but it decreases performance significantly on lower-end hardware, making your application depend on more powerful hardware; that's why most modern OSes are written in pure C and assembly. How to balance the two is a real challenge.
Could you please explain a little bit more about your static Logger factory approach? Thanks.
I do not like singletons much because using them is always unclean. I have even read job descriptions that say "knowledge of design patterns, while knowing that Singleton isn't one to use". Singletons lead to dependency hell, and if you ever want to switch to a completely different logging approach (maybe for testing or production) without destroying the old one, you need to change a lot.
Another problem with the approach is the use of signals. Yes, you get thread safety for free and don't interrupt the code execution as much, but...
Every object you log from needs to be a QObject
If you are hunting crashes, your last log lines will not be printed, because the logger had no time to do so before the program crashed.
I would print directly. Maybe you can have a static Logger factory that returns a logger, so you can have one logger object in every thread (the memory impact will still be very small). Or you can have one that is thread-safe using semaphores and has a static interface. In both cases the logger should be used via an interface, to be more flexible later.
Also make sure that your approach prints directly. Even printf writes to a buffer before printing, and you need to flush it every time, or under bad circumstances you might never find the crash you are hunting for.
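As a rough sketch of that direction (the class names, the file path, and the mutex-based locking are my own assumptions, not part of the question; a semaphore or per-thread logger would work just as well):

#include <fstream>
#include <mutex>
#include <string>

// Callers depend only on this interface, so the concrete logger can be swapped later.
struct ILogger {
    virtual ~ILogger() = default;
    virtual void write(const std::string &line) = 0;
};

// One thread-safe implementation: serialises writes with a mutex and flushes every
// line, so a crash loses at most the line currently being written.
class FileLogger : public ILogger {
public:
    explicit FileLogger(const std::string &path) : out_(path, std::ios::app) {}
    void write(const std::string &line) override {
        std::lock_guard<std::mutex> lock(mutex_);
        out_ << line << std::endl;   // std::endl flushes immediately
    }
private:
    std::mutex mutex_;
    std::ofstream out_;
};

// Static factory with a static interface; it could just as well hand out
// one logger per thread instead of a single shared one.
struct LoggerFactory {
    static ILogger &get() {
        static FileLogger logger("app.log");   // path chosen only for the sketch
        return logger;
    }
};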
Just my 2 cents.
I would consider separating the fact that a logger should be unique, and how the other classes get an instance of the logger class.
Creating and obtaining an instance of the logger could be handled in some sort of factory that internally encapsulates its construction and makes only one instance if need be.
Then, the way that the other classes get an instance of the logger could be handled via Dependency injection or by a static method defined on the aforementioned factory. Using dependency injection, you create the logger first, then inject it into the other classes once created.
A singleton usually has a static function like get_instance() that returns a reference to that singleton, so you don't need to store a pointer to the singleton in every object.
Furthermore, it makes no sense to let each object itself connect its log signal to the logging slot of the logging object, because that makes each and every class in your project dependent on your logging class. Instead, let a class just emit the signal with the log information and establish the connection somewhere central, on a higher level (e.g. when setting up your system in the main function). That way your other classes don't have to know who is listening (if at all) and you can easily modify or replace your logging class and mechanism.
Btw.: There are already pretty advanced logging libraries out there, so you should find out if you can use one of them or at least, how they are used and adapt that concept to your needs.
==========================
EDIT 1 (response to EDIT 1 of QtFan):
Sorry, apparently I misunderstood you: I thought the pointer would be a class member and not just a local variable in the constructor, which is of course fine.
Let me also clarify what I meant by making the connection on a higher level:
This was solely aimed at where you make the connection - i.e. where you put the line
connect(getInstance(), SIGNAL(write(QString)), this, SLOT(write(QString)));
I was suggesting to put this somewhere outside the class e.g. into the main function. So the pseudo code would look something like this:
void main() {
create Thread1
create Thread2
create Thread3
create Logger
connect Thread1 signal to Logger slot
connect Thread2 signal to Logger slot
connect Thread3 signal to Logger slot
run Thread1
run Thread2
run Thread3
}
This has the advantage that your classes don't have to be aware of the kind of logger you are using and whether there is only one or multiple or no one at all. I think the whole idea about signals and slots is that the emitting object doesn't need to know where its signals are processed and the receiving class doesn't have to know where the signals are coming from.
Of course, this is only feasible if you don't create your objects/threads dynamically at run time. It also doesn't work if you want to log during the creation of your objects.
I have ideas for solving this, but I have a feeling this problem has been solved many times over.
I have implemented an observer pattern, similar to this:
struct IObserver {
    virtual ~IObserver() = default;
    virtual void notify(Event &event) = 0;
};

struct Notifier {
    void registerObserver(IObserver* observer, EventRange range) {
        lock_guard<mutex> lock(_mutex);
        _observers[observer] = range;
    }
    void deregisterObserver(IObserver* observer) {
        lock_guard<mutex> lock(_mutex);
        _observers.erase(_observers.find(observer));
    }
    void handleEvent() { /* pushes event onto queue */ }
    void run();

    mutex _mutex;
    queue<Event> _eventQueue;
    map<IObserver*, EventRange> _observers;
};
The run method is called from a thread I create (it is actually owned by the notifier). The method looks something like...
void Notifier::run() {
    while (true) {
        waitForEvent();
        Event event = _eventQueue.pop();
        // now we have an event, acquire a lock and notify listeners
        lock_guard<mutex> lock(_mutex);
        typedef map<IObserver*, EventRange>::value_type ObserverEntry;
        BOOST_FOREACH(ObserverEntry &value, _observers) {
            value.first->notify(event);
        }
    }
}
This works perfectly, until notify attempts to create an object that in turn attempts to register an observer. In this scenario, an attempt is made to acquire the already locked lock, and we end up in a deadlock. This situation can be avoided by using a recursive mutex. However, now consider the situation where a notification triggers removal of an Observer. Now map iterators are invalidated.
My question is, is there a pattern available that prevents this deadlock situation?
I think the real problem here is that you have an event that is manipulating the list of observers while you are iterating over the list of observers. If you are executing a notify(...) operation, you are iterating over the list. If you are iterating over the original list (and not a copy), then either registration or deregistration alters the list while you are iterating over it. I don't believe the iterators in a std::map would handle this well.
I have had this problem as well (just in a single threaded context) and found the only way to deal with it was to create a temporary copy of the observer list and iterate over that.
I also cached off removed observers during iteration, so that if I had observers A, B, and C, and A's notification led to C being removed, the copied list would still contain C but C would get skipped.
I have an implementation of this for single threaded applications.
You could convert it to a threaded approach with a little work.
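To make that concrete against the Notifier from the question, a sketch might look like the following (the _recentlyRemoved member is something I'm adding for illustration; deregisterObserver would insert into it while holding the mutex):

void Notifier::run() {
    while (true) {
        waitForEvent();
        Event event = _eventQueue.pop();

        // Snapshot the observer map so notify() can register or deregister
        // observers without invalidating this iteration, and so the mutex is
        // not held while callbacks run (no deadlock on re-entrant registration).
        map<IObserver*, EventRange> snapshot;
        {
            lock_guard<mutex> guard(_mutex);
            snapshot = _observers;
            _recentlyRemoved.clear();   // hypothetical set<IObserver*> member
        }

        for (auto &value : snapshot) {
            {
                lock_guard<mutex> guard(_mutex);
                // Skip observers removed by an earlier callback in this round.
                if (_recentlyRemoved.count(value.first))
                    continue;
            }
            value.first->notify(event);   // no lock held here
        }
    }
}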
EDIT: I think the points of vulnerability for a multi-threaded application are the creation of the copy of the observer list (which you do when you enter notify(...)) and the addition of the observers to the "recently removed" list when observers detach. Don't place mutexes around these functions; place mutexes around the creation/update of the lists inside those function or create functions for just that purpose and place mutexes around them.
EDIT: I also strongly suggest creating some unit test cases (e.g. CPP Unit) to hammer the attach/detach/multi-detach scenarios from multiple threads. I had to do this in order to find one of the subtler problems when I was working on it.
EDIT: I specifically don't try to handle the case of new observers being added as a consequence of a notify(...) call. That is to say, there is a list of recently removed observers but not a list of recently added ones. This is done to prevent a "notify->add->notify->add->etc." cascade, which can happen if somebody puts a notify in a constructor.
The general approach is sketched out here.
The code is available on github here.
I have used this approach in several example solutions, which you can find on this site (and code for many of them on github as well).
Was this helpful?
I have a multi-threaded C++ application which runs tasks in separate threads. Each task has an object which handles and stores its output. Each task creates different business-logic objects and possibly other threads or thread pools.
What I want is to somehow provide an easy way for any of the business-logic objects run by a task to access that task's output, without manually passing the "output" object to each business-logic object.
What I envisage is creating an output singleton factory and storing the task_id in TLS. But the problem is that when the business logic creates a new thread or thread pool, those threads would not have the task_id in their TLS. In that case I would need access to the parent thread's TLS.
The other way is to simply grab all output since the task's start. There would be output from different tasks mixed in during that time, but at least it's better than nothing...
I'm looking for any suggestions or ideas for a clean and pretty way of solving my problem. Thanks.
upd: yeah, it is not a singleton, I agree. I just want to be able to access this object like this:
output << "message";
And that's it. No worrying about passing pointers to the output object between business-logic classes. I need a global output object per task.
From an application point of view, they are not singletons, so why treat the objects like singletons?
I would make a new instance of the output storer and pass the (smart?) pointer to the new thread. The main function may put the pointer in TLS, thus making the instance global per thread (I don't think this is a wise design decision, but it is what was asked). When making a new (sub-)thread, the pointer can again be passed along. So in my view, no singletons or factories are needed.
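A rough sketch of that idea (TaskOutput, g_currentOutput and runTask are illustrative names I'm introducing, and the string buffer is just a stand-in for whatever storage the task really uses):

#include <memory>
#include <sstream>
#include <thread>

// Per-task output sink; one instance is created when the task starts.
struct TaskOutput {
    std::ostringstream buffer;   // placeholder for the real output storage
};

// Per-thread convenience pointer so deeply nested code can write without a
// parameter; every thread working for the task sets it on entry.
thread_local TaskOutput *g_currentOutput = nullptr;

void businessLogic() {
    if (g_currentOutput)
        g_currentOutput->buffer << "message\n";
}

void runTask(std::shared_ptr<TaskOutput> output) {
    g_currentOutput = output.get();   // "global per thread", but task-specific
    businessLogic();

    // A sub-thread gets the same pointer handed to it explicitly, which is how
    // the parent's per-thread value reaches the child (real code would also
    // need to synchronise concurrent writes to the shared sink).
    std::thread worker([output] {
        g_currentOutput = output.get();
        businessLogic();
    });
    worker.join();
}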
If I understand you correctly, you want to have multiple class instances (each not necessarily of the same class) all able to access a common data pool that needs to be thread-safe. I can think of a few ways to do this. The first idea is to put this data pool in a class that each of the other classes contains. The data-pool class actually stores its data in a static member, so there is only one copy of the data even though there will be more than one instance of the data-pool class. The class then has accessor methods which access this static data pool (so that it is transparent). To make it thread-safe, you would then require the access to go through a mutex or something like that.
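A bare-bones sketch of that idea, with the shared data living behind a static member and a mutex (class and member names are mine, and string keys/values are just for illustration):

#include <map>
#include <mutex>
#include <string>

// Each business-logic object can hold one of these by value; the actual data
// is shared because it lives in static members.
class DataPool {
public:
    void put(const std::string &key, const std::string &value) {
        std::lock_guard<std::mutex> lock(mutex_);
        data_[key] = value;
    }

    std::string get(const std::string &key) const {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = data_.find(key);
        return it != data_.end() ? it->second : std::string();
    }

private:
    static std::mutex mutex_;
    static std::map<std::string, std::string> data_;
};

std::mutex DataPool::mutex_;
std::map<std::string, std::string> DataPool::data_;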