I'm trying to expose a C interface for my C++ library. This notably involves functions that allow the user to create, launch, query the status of, and then release a background task.
The task is implemented as a C++ class whose members are protected from concurrent read/write by an std::mutex.
My issue comes when I expose a C interface for this background task. Basically, I have the following functions (assuming task_t is an opaque pointer to an actual struct containing the real task class):
task_t* mylib_task_create();
bool mylib_task_is_running(task_t* task);
void mylib_task_release(task_t* task);
My goal is to make any concurrent usage of these functions thread-safe, i.e. if a client thread calls mylib_task_is_running() at the same time that another thread calls mylib_task_release(), everything should still be fine. However, I'm not sure exactly how to achieve that.
At first I thought about adding an std::mutex to the implementation of task_t, but that means the delete statement at the end of mylib_task_release() has to happen while the mutex is not held, so it doesn't completely solve the problem.
I also thought about using some sort of reference counting, but I still end up against the same kind of issue, where the actual delete might happen right after a hypothetical retain() function is called.
I feel like there should be a (relatively) simple solution to this, but I can't quite put my finger on it. How can I make it so I don't have to force the client code to protect accesses to task_t?
If task_t is being deleted, you should ensure that nobody else still has a pointer to it.
If one thread is deleting a task_t while another is trying to acquire its mutex, it should be apparent that you should not have deleted the task_t yet.
shared_ptrs are a great help for this.
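To illustrate, here is a minimal sketch of that idea, assuming a hypothetical Task class (none of these internals come from the question). Each C call copies the shared_ptr, so the underlying object cannot be destroyed in the middle of a call by whatever else holds a reference (for example an internal worker thread); the handle itself, as noted above, must still not be used after it has been released:

#include <memory>
#include <mutex>

// Hypothetical internal task class; names are illustrative only.
class Task {
public:
    bool is_running() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_running;
    }
private:
    mutable std::mutex m_mutex;
    bool m_running = false;
};

// The opaque handle owns a shared_ptr; any internal worker thread can hold
// another shared_ptr, so the Task outlives every call that is in flight.
struct task_t {
    std::shared_ptr<Task> impl;
};

extern "C" task_t* mylib_task_create() {
    return new task_t{ std::make_shared<Task>() };
}

extern "C" bool mylib_task_is_running(task_t* task) {
    std::shared_ptr<Task> local = task->impl;   // keeps the Task alive for this call
    return local && local->is_running();
}

extern "C" void mylib_task_release(task_t* task) {
    delete task;   // the Task itself dies only when the last shared_ptr drops
}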
In our application we deal with data that is processed in a worker thread and accessed in a display thread and we have a mutex that takes care of critical sections. Nothing special.
Now we have thought about re-working our code, where currently locking is done explicitly by the party holding and handling the data. We thought of a single entity that holds the data and only gives access to the data in a guarded fashion.
For this, we have a class called GuardedData. The caller can request such an object and should keep it only for a short time in local scope. As long as the object lives, it keeps the lock. As soon as the object is destroyed, the lock is released. The data access is coupled with the locking mechanism without any explicit extra work in the caller. The name of the class reminds the caller of the present guard.
template<typename T, typename Lockable>
class GuardedData {
public:
    GuardedData(T &d, Lockable &m) : guard(m), data(d) {}
    T *operator->() { return &data; }
private:
    boost::lock_guard<Lockable> guard;
    T &data;
};
Again, a very simple concept. The operator-> mimics the semantics of STL iterators for access to the payload.
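For example (assuming the class above plus an illustrative Settings struct, which is not part of the original design), a caller would use it like this:

#include <boost/thread/mutex.hpp>        // boost::mutex
#include <boost/thread/lock_guard.hpp>   // boost::lock_guard

struct Settings { int timeoutMs; };      // illustrative payload type

void updateTimeout(Settings &settings, boost::mutex &settingsMutex)
{
    // The guard lives only for this short local scope, as described above.
    GuardedData<Settings, boost::mutex> guarded(settings, settingsMutex);
    guarded->timeoutMs = 500;            // access via operator-> while the lock is held
}                                        // lock released here when 'guarded' is destroyed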
Now I wonder:
Is this approach well known?
Is there maybe a templated class like this already available, e.g. in the boost libraries?
I am asking because I think it is a fairly generic and usable concept. I could not find anything like it though.
Depending upon how this is used, you are almost guaranteed to end up with deadlocks at some point. If you want to operate on two pieces of data, you end up locking the mutex twice and deadlocking (unless you use a recursive mutex, which may not be desired). If each piece of data has its own mutex instead, you can still deadlock when the lock order is not consistent - and you have no control over that with this scheme without making it really complicated.
Also, how are your GuardedData objects passed around? boost::lock_guard is not copyable - it raises ownership issues on the mutex, i.e. where and when it is released.
It's probably easier to copy the parts of the data you need to the reader/writer threads as and when they need them, keeping the critical section short. The writer would similarly commit to the data model in one go.
Essentially your viewer thread gets a snapshot of the data it needs at a given time. This may even fit entirely in a CPU cache sitting near the core that is running the thread and never make it into RAM. The writer thread may modify the underlying data whilst the reader is dealing with it (though that should invalidate the view); since the viewer has a copy, it can continue on and provide a view of the data as it was at the moment it was synchronized.
The other option is to give the view a smart pointer to the data (which should be treated as immutable). If the writer wishes to modify the data, it copies it at that point, modifies the copy and, when complete, switches the pointer to the data in the model. This would necessitate blocking all readers/writers whilst processing, unless there is only one writer. The next time the reader requests the data, it gets the fresh copy.
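A rough sketch of that second idea, assuming a single writer and an illustrative Data type (std::shared_ptr and std::mutex here; the Boost equivalents work the same way):

#include <functional>
#include <memory>
#include <mutex>

struct Data { /* payload */ };   // illustrative type

class Model {
public:
    // Readers get a shared_ptr to an immutable snapshot and can use it
    // for as long as they like without holding any lock.
    std::shared_ptr<const Data> snapshot() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_current;
    }
    // The single writer copies the current data, modifies the copy outside
    // the lock, and then swaps the pointer in one short critical section.
    void update(const std::function<void(Data&)> &modify) {
        std::shared_ptr<Data> next;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            next = std::make_shared<Data>(*m_current);
        }
        modify(*next);
        std::lock_guard<std::mutex> lock(m_mutex);
        m_current = next;
    }
private:
    mutable std::mutex m_mutex;
    std::shared_ptr<const Data> m_current = std::make_shared<Data>();
};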
Well known? I'm not sure. However, I use a similar mechanism in Qt pretty often, called QMutexLocker. The distinction (a minor one, imho) is that you bind the data together with the mutex. A very similar mechanism to the one you've described is the norm for thread synchronization in C#.
Your approach is nice for guarding one data item at a time but gets cumbersome if you need to guard more than that. Additionally, it doesn't look like your design would stop me from creating this object in a shared place and accessing the data as often as I please, thinking that it's guarded perfectly fine, but in reality recursive access scenarios are not handled, nor are multi-threaded access scenarios if they occur in the same scope.
There seems to be a slight disconnect in the idea. Its use conveys to me that accessing the data is always made thread-safe because the data is guarded. Often, this isn't enough to ensure thread-safety. Order of operations on protected data often matters, so the locking is really scope-oriented, not data-oriented. You could get around this in your model by guarding a dummy object and wrapping your guard object in a temporary scope, but then why not just use one of the existing mutex implementations?
Really, it's not a bad approach, but you need to make sure its intended use is understood.
In my C++ project I have a singleton class. During execution, the same singleton class is sometimes accessed from two different threads simultaneously, resulting in two instances of the singleton class being produced, which is a problem.
How do I handle such a case?
Then it's not a singleton :-)
You'll probably need to show us some code but your basic problem will be with the synchronisation areas.
If done right, there is no way that two threads can create two objects of the class. In fact, the class itself should be the place where the singleton nature is being enforced so that erroneous clients cannot corrupt the intent.
The basic structure will be:
lock mutex
if instance doesn't exist:
instance = new object
unlock mutex
Without something like mutex protection (or critical code section or any other manner in which you can guarantee at the language/library level that two threads can't run the code simultaneously), there's a possibility that thread one may be swapped out between the check and the instantiation, leading to two possible instances of your "singleton".
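For what it's worth, in C++11 and later one way to get that guarantee without writing the mutex yourself is a function-local static, whose initialization the language makes thread-safe (a sketch only, with an illustrative class name):

class Config {                       // illustrative singleton class
public:
    static Config& instance() {
        // Only one thread runs the constructor; other threads that arrive
        // concurrently block until initialization has finished.
        static Config theInstance;
        return theInstance;
    }
    Config(const Config&) = delete;
    Config& operator=(const Config&) = delete;
private:
    Config() = default;
};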
And, as others will no doubt suggest, singletons may well be a bad idea. I'm not quite in the camp where every use is wrong; the usual problem is that people treat them as "god" objects. They can have their uses, but there's often a better way, though I won't presume to tell you that you need to change that, since I don't know your use case.
If you get two different instances in different threads you are doing something wrong. Threads, unlike processes, share their memory. So memory allocated in one thread (e.g. for an object instance) is also usable in the other.
If your singleton is getting created twice, it's not guarded with a mutex. Lock a mutex when getting/accessing/setting the internal object.
I believe you have done something like this: if(!_instance) _instance = new Singleton(); - therein lies a critical section, which you need to safeguard with a mutex.
Do not use a singleton; it is a well-known anti-pattern.
A good read:
Singletons: Solving problems you didn’t know you never had since 1995
If you still want to persist and go ahead with it for reasons known only to you, what you need is a thread-safe singleton implementation, something like this:
YourClass* YourClass::getInstance()
{
    MutexLocker locker(YourClass::m_mutex);
    if(!m_instanceFlag)
    {
        m_instance = new YourClass();
        m_instanceFlag = true;
    }
    return m_instance;
}
Where MutexLocker is a wrapper class around an ordinary mutex, which locks the mutex when its instance is created and unlocks it when the instance goes out of scope at the end of the function.
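Such a wrapper is essentially what std::lock_guard (or boost::lock_guard) already gives you; a hand-rolled version might look roughly like this:

#include <mutex>

class MutexLocker {
public:
    explicit MutexLocker(std::mutex &m) : m_mutex(m) { m_mutex.lock(); }
    ~MutexLocker() { m_mutex.unlock(); }
    MutexLocker(const MutexLocker&) = delete;              // non-copyable:
    MutexLocker& operator=(const MutexLocker&) = delete;   // the lock has one owner
private:
    std::mutex &m_mutex;
};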
What is the best way of performing the following in C++? Whilst my current method works, I'm not sure it's the best way to go:
1) I have a master class that has some functions in it
2) I have a thread that takes some instructions on a socket and then runs one of the functions in the master class
3) There are a number of threads that access various functions in the master class
I create the master class and then create instances of the thread classes from the master. The constructor for the thread class gets passed the "this" pointer of the master. I can then run functions from the master class inside the threads - i.e. when I get a command to do something, it runs a function in the master class from the thread. I have mutexes etc. to prevent race problems.
Am I going about this the wrong way? It kind of seems like the thread classes should inherit from the master class, or another approach would be to not have separate thread classes at all but just have them as functions of the master class, but that gets ugly.
Sounds good to me. In my servers, it is called 'SCB' - ServerControlBlock - and provides access to services like the IOCPbuffer/socket pools, logger, UI access for status/error messages and anything else that needs to be common to all the handler threads. Works fine and I don't see it as a hack.
I create the SCB, (and ensure in the ctor that all services accessed through it are started and ready for use), before creating the thread pool that uses the SCB - no nasty singletonny stuff.
Rgds,
Martin
Separate thread classes is pretty normal, especially if they have specific functionality. I wouldn't inherit from the main thread.
Passing the this pointer to threads is not, in itself, bad. What you do with it can be.
The this pointer is just like any other POD-ish data type. It's just a chunk of bits. The stuff that this points to might be more than PODs, however, and passing what is in effect a pointer to its members can be dangerous for all the usual reasons. Any time you share anything across threads, it introduces potential race conditions and deadlocks. The elementary means to resolve those conflicts is, of course, to introduce synchronization in the form of mutexes, semaphores, etc., but this can have the surprising effect of serializing your application.
Say you have one thread reading data from a socket and storing it to a synchronized command buffer, and another thread which reads from that command buffer. Both threads use the same mutex, which protects the buffer. All is well, right?
Well, maybe not. Your threads could become serialized if you're not very careful with how you lock the buffer. Presumably you created separate threads for the buffer-insert and buffer-remove code so that they could run in parallel. But if you lock the buffer on each insert and each remove, then only one of those operations can be executing at a time. As long as you're writing to the buffer, you can't read from it, and vice versa.
You can try to fine-tune the locks so that they are as brief as possible, but so long as you have shared, synchronized data, you will have some degree of serialization.
Another approach is to hand data off to another thread explicitly, and remove as much data sharing as possible. Instead of writing to and reading from a buffer as in the above, for example, your socket code might create some kind of Command object on the heap (eg Command* cmd = new Command(...);) and pass that off to the other thread. (One way to do this in Windows is via the QueueUserAPC mechanism).
There are pros & cons to both approaches. The synchronization method has the benefit of being somewhat simpler to understand and implement at the surface, but the potential drawback of being much more difficult to debug if you mess something up. The hand-off method can make many of the problems inherent with synchronization impossible (thereby actually making it simpler), but it takes time to allocate memory on the heap.
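To illustrate the hand-off approach with portable C++ rather than the Windows-specific QueueUserAPC, here is a rough sketch assuming an illustrative Command type; the lock is only ever held around the queue operations themselves:

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Command { /* parsed request data */ };   // placeholder type

class CommandQueue {
public:
    void push(std::unique_ptr<Command> cmd) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(cmd));
        }
        m_cv.notify_one();                       // wake the consumer outside the lock
    }
    std::unique_ptr<Command> pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        std::unique_ptr<Command> cmd = std::move(m_queue.front());
        m_queue.pop();
        return cmd;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::unique_ptr<Command>> m_queue;
};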
I have a server listening on a port for request. When the request comes in, it is dispatched to a singleton class Singleton. This Singleton class has a data structure RootData.
class Singleton {
public:
    void process();
    void refresh();
private:
    RootData mRootData;
};
The class has two functions: process(), which works with mRootData to do some processing, and refresh(), which is called periodically from another thread to refresh mRootData with the latest changes from the database.
It is required that access to mRootData be guarded by a mutex.
I have the following questions:
1] If the class is a singleton and mRootData is inside that class, is the mutex guard really necessary?
I know it is necessary for the conflict between refresh/process. But from the server side, I think there will be only one call to process happening at any given time (because the class is a singleton). Please correct me if my understanding is wrong.
2] Should I protect i) the data structure, OR ii) the functions accessing the data structure? E.g.
i) const RootData& GetRootData()
   {
       ACE_Read_Guard guard(m_oMutexReadWriteLock);
       return mRootData;
       // Mutex is released when this function returns
   }
   // Similarly, a write lock for SetRootData()

ii) void process()
    {
        ACE_Read_Guard guard(m_oMutexReadWriteLock);
        // use mRootData and do processing
        // GetRootData() and SetRootData() won't be mutex protected.
        // Mutex is released when this function returns
    }
3] If the answer to the above is i), should I return by reference or by object?
Please explain in either case.
Thanks in advance.
1] If the class is a singleton and mRootData is inside that class, is the mutex guard really necessary?
Yes it is, since one thread may call process() while another is calling refresh().
2] Should I protect i) the data structure, OR ii) the functions accessing the data structure?
A mutex is meant to protect a common code path, i.e. (part of) the function accessing the shared data. And it is easiest to use when locking and releasing happen within the same code block. Putting them into different methods is almost an open invitation for deadlocks, since it is up to the caller to ensure that every lock is properly released.
Update: if GetRootData and SetRootData are public functions too, there is not much point guarding them with mutexes in their current form. The problem is that you are publishing a reference to the shared data, after which it is completely out of your control what the callers may do with it and when. 100 callers from 100 different threads may store a reference to mRootData and decide to modify it at the same time! Similarly, after calling SetRootData the caller may retain the reference to the root data and access it anytime, without you even noticing (except eventually from a data corruption or deadlock...).
You have the following options (apart from praying that the clients be nice and don't do nasty things to your poor singleton ;-)
create a deep copy of mRootData both in the getter and the setter (see the sketch after this list). This keeps the data confined to the singleton, where it can be guarded with locks. OTOH, callers of GetRootData get a snapshot of the data, and subsequent changes are not visible to them - this may or may not be acceptable to you.
rewrite RootData to make it thread safe, then the singleton needs to care no more about thread safety (in its current setup - if it has other data members, the picture may be different).
Update2:
or remove the getter and setter altogether (possibly together with moving data processing methods from other classes into the singleton, where these can be properly guarded by mutexes). This would be the simplest and safest, unless you absolutely need other parties to access mRootData directly.
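Here is a sketch of the first option (deep copies in both accessors), using std::mutex for brevity, though the same shape works with the ACE read/write guard from the question; RootData is assumed to be copyable:

#include <mutex>

class Singleton {
public:
    RootData GetRootData() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return mRootData;              // returns a copy - a snapshot of the data
    }
    void SetRootData(const RootData &newData) {
        std::lock_guard<std::mutex> lock(m_mutex);
        mRootData = newData;           // copies the caller's data in
    }
private:
    mutable std::mutex m_mutex;
    RootData mRootData;
};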
1.) Yes, the mutex is necessary. Although there is only one instance of the class in existence at any one time, multiple threads could still call process() on that instance at the same time (unless you design your app so that never happens).
2.) Anytime you use the value you should protect it with the mutex.
However, you don't mention a GetRootData and SetRootData in your class declaration above. Are these private (used only inside the class to access the data) or public (to allow other code to access the data directly)?
If you need to provide outside access to the data by making the GetRootData() function public, then you would need to return a copy, or your callers could then store a reference and manipulate the data after the lock has been released. Of course, then changes they made to the data wouldn't be reflected inside the singleton, which might not be what you want.
I'm something of an intermediate programmer, but relatively a novice to multi-threading.
At the moment, I'm working on an application with a structure similar to the following:
class Client
{
public:
    Client();
private:
    // These are all initialised/populated in the constructor.
    std::vector<struct clientInfo> otherClientsInfo;
    ClientUI* clientUI;
    ClientConnector* clientConnector;
};

class ClientUI
{
public:
    ClientUI(std::vector<struct clientInfo>* clientsInfo);
private:
    // Callback which gets new client information
    // from a server and pushes it into the otherClientsInfo vector.
    void synchClientInfo();
    std::vector<struct clientInfo>* otherClientsInfo;
};

class ClientConnector
{
public:
    ClientConnector(std::vector<struct clientInfo>* clientsInfo);
private:
    void connectToClients();
    std::vector<struct clientInfo>* otherClientsInfo;
};
Somewhat a contrived example, I know. The program flow is this:
Client is constructed and populates otherClientsInfo and constructs clientUI and clientConnector with a pointer to otherClientsInfo.
clientUI calls synchClientInfo() anytime the server contacts it with new client information, parsing the new data and pushing it back into otherClientsInfo or removing an element.
clientConnector will access each element in otherClientsInfo when connectToClients() is called but won't alter them.
My first question: is my assumption correct that if both ClientUI and ClientConnector access otherClientsInfo at the same time, the program will bomb out because of thread-unsafety?
If this is the case, then how would I go about making access to otherClientsInfo thread safe, as in perhaps somehow locking it while one object accesses it?
if both ClientUI and ClientConnector access otherClientsInfo at the same time, will the program bomb out because of thread-unsafety?
Yes. Most implementations of std::vector do not allow concurrent reads and modification. (You'd know if you were using one that did.)
If this is the case, then how would I go about making access to otherClientsInfo thread safe, as in perhaps somehow locking it while one object accesses it?
You would require at least a lock ( either a simple mutex or critical section or a read/write lock ) to be held whenever the vector is accessed. Since you've only one reader and writer there's no point having a read/write lock.
However, actually doing that correctly will get increasingly difficult as you expose the vector to the other classes: you will have to expose the locking primitive too, and remember to acquire it whenever you use the vector. It may be better to expose addClientInfo, removeClientInfo, and const and non-const foreachClientInfo functions which encapsulate the locking in the Client class, rather than having disjoint bits of the data owned by the client floating around the place.
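A rough sketch of that encapsulation, using std::mutex for illustration (a boost::mutex or a Windows critical section would work the same way); an id field on clientInfo is assumed purely for the remove example:

#include <algorithm>
#include <functional>
#include <mutex>
#include <vector>

struct clientInfo { int id; /* ... */ };   // 'id' assumed for the example

class Client {
public:
    void addClientInfo(const clientInfo &info) {
        std::lock_guard<std::mutex> lock(m_mutex);
        otherClientsInfo.push_back(info);
    }
    void removeClientInfo(int id) {
        std::lock_guard<std::mutex> lock(m_mutex);
        otherClientsInfo.erase(
            std::remove_if(otherClientsInfo.begin(), otherClientsInfo.end(),
                           [id](const clientInfo &c) { return c.id == id; }),
            otherClientsInfo.end());
    }
    void forEachClientInfo(const std::function<void(const clientInfo&)> &fn) const {
        std::lock_guard<std::mutex> lock(m_mutex);
        for (const clientInfo &c : otherClientsInfo)
            fn(c);   // the lock stays held while the callback runs
    }
private:
    mutable std::mutex m_mutex;
    std::vector<clientInfo> otherClientsInfo;
};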
See
Reader/Writer Locks in C++
and
http://msdn.microsoft.com/en-us/library/ms682530%28VS.85%29.aspx
The first one is probably a bit advanced for you. You can start with the Critical section (link 2).
I am assuming you are using Windows.
if both ClientUI and ClientConnector access otherClientsInfo at the same time, will the program bomb out because of thread-unsafety?
Yes, STL containers are not thread-safe.
If this is the case, then how would I go about making access to otherClientsInfo thread safe, as in perhaps somehow locking it while one object accesses it?
In the simplest case, a mutual exclusion pattern around the access to the shared data... if you had multiple readers, however, you would go for a more efficient pattern.
Is clientConnector called from the same thread as synchClientInfo() (even if it is all callback)?
If so, you don't need to worry about thread safety at all.
If you want to avoid simultaneous access to the same data, you can use mutexes to protect the critical section, for example the mutexes from Boost.Thread.
In order to ensure that access to the otherClientsInfo member from multiple threads is safe, you need to protect it with a mutex. I wrote an article about how to directly associate an object with a mutex in C++ over on the Dr Dobb's website:
http://www.drdobbs.com/cpp/225200269