I have a requirement for a message queue that will store objects, with two threads acting as producer and consumer. I am planning to use std::queue to store the objects. I am working with MFC and C++ on VC 6.0. For synchronization between the two threads, which synchronization primitives could be used, given that I can't use C++11 on VC 6.0?
Please provide me some direction. I am planning to use a CriticalSection and an Event. Is there a better way to handle this?
Also, is std::queue thread-safe?
I'm not well versed in the MFC synchronization tools, but what you want to do is definitely possible.
EDIT: Based on what people are saying in the comments, it looks like CCriticalSection is a better choice than CMutex in this case, so I've updated my answer.
For signaling between threads, semaphores would be a good choice. Wikipedia has a nice pseudocode example of using semaphores for the producer-consumer / bounded-buffer problem. Note that you will need two semaphores: one that counts how many items are in the queue, and one that counts how many free slots the queue has left. With more than two threads, you also need a mutex-type or critical section synchronization mechanism in addition to the semaphores (see the wiki link, second code example). This may seem counter-intuitive, but keep in mind that the producer and the consumer are waiting on two different queue conditions before they act.
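The two-semaphore pattern above can be sketched in portable C++. Since VC 6.0 has no standard semaphore, this sketch builds a minimal one from std::mutex and std::condition_variable; all class and member names here are illustrative, not from MFC or any library:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Minimal counting semaphore built from a mutex and a condition variable.
class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int count;
public:
    explicit Semaphore(int initial) : count(initial) {}
    void acquire() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return count > 0; });
        --count;
    }
    void release() {
        { std::lock_guard<std::mutex> lk(m); ++count; }
        cv.notify_one();
    }
};

// Bounded buffer guarded by the two semaphores plus a mutex for the queue itself.
class BoundedQueue {
    std::queue<int> q;
    std::mutex m;        // protects q
    Semaphore slots;     // counts free slots
    Semaphore items{0};  // counts queued items
public:
    explicit BoundedQueue(int capacity) : slots(capacity) {}
    void push(int v) {
        slots.acquire();  // wait for a free slot
        { std::lock_guard<std::mutex> lk(m); q.push(v); }
        items.release();  // signal: one more item available
    }
    int pop() {
        items.acquire();  // wait for an item
        int v;
        { std::lock_guard<std::mutex> lk(m); v = q.front(); q.pop(); }
        slots.release();  // signal: one more free slot
        return v;
    }
};
```

Note how push waits on slots while pop waits on items: those are the "two different queue conditions" mentioned above, and the mutex only protects the queue operations themselves.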
Based on what I've read, a good option is to make your own wrapper class with a CCriticalSection member. When you want to lock the resource (as you would within one of your wrapper class's get/set member functions), call CCriticalSection's Lock() method (shown here). When you're done with the shared resource, remember to call Unlock() on the CCriticalSection.
Adapted From MSDN:
#include <queue>
class SharedQueue
{
static std::queue<int> _qShared; //shared resource
static CCriticalSection _critSect;
public:
SharedQueue(void) {}
~SharedQueue(void) {}
void push(int); //locks, modifies, and unlocks shared resource
};
//Definition of static members and push
std::queue<int> SharedQueue::_qShared;
CCriticalSection SharedQueue::_critSect;
void SharedQueue::push(int item)
{
_critSect.Lock();
_qShared.push(item);
_critSect.Unlock();
}
As pointed out in the comments and in the MSDN docs, CCriticalSection is useful when access to your shared resource does not cross process boundaries. It is also more performant in this case than CMutex.
You need to wrap the std::queue, since it is not thread safe. Assume any container in the STL is not thread safe unless the documentation specifically mentions that it is.
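As a portable sketch of the CriticalSection + Event pattern the question asks about, here is the same shape in modern C++, with std::mutex standing in for the critical section and std::condition_variable standing in for the event. This is not VC 6.0 code; it is just an illustration of the structure, and the names are made up:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// The mutex plays the CCriticalSection role; the condition variable
// plays the role of the Event the question mentions.
class BlockingQueue {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable notEmpty;
public:
    void push(int v) {
        { std::lock_guard<std::mutex> lk(m); q.push(v); }
        notEmpty.notify_one();  // "set the event": wake one waiting consumer
    }
    int pop() {  // blocks until an item is available
        std::unique_lock<std::mutex> lk(m);
        notEmpty.wait(lk, [this] { return !q.empty(); });
        int v = q.front();
        q.pop();
        return v;
    }
};
```

The consumer never spins: it sleeps inside wait() until a producer signals, which is exactly the behavior you would build with WaitForSingleObject on an event in Win32.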
According to Boost documentation boost::mutex and boost::timed_mutex are supposed to be different. The first one implements Lockable Concept, and the second - TimedLockable Concept.
But if you take a look at the source, you can see they're basically the same thing. The only differences are the lock typedefs. You can call timed_lock on boost::mutex or use boost::unique_lock with a timeout just fine.
typedef ::boost::detail::basic_timed_mutex underlying_mutex;
class mutex:
public ::boost::detail::underlying_mutex
class timed_mutex:
public ::boost::detail::basic_timed_mutex
What's the rationale behind that? Is it some remnant of the past, is it wrong to use boost::mutex as a TimedLockable? It's undocumented after all.
I have not looked at the source, but I used these a few days ago, and the timed mutexes behave differently. A timed lock blocks until either it acquires the lock or the time is up, then returns; a unique lock blocks until it can get the lock.
A try lock will not block, and you can then test whether it has ownership of the lock. A timed lock will block for the specified amount of time, then behave as a try lock - that is, cease blocking, and you can test for ownership of the lock.
I believe that internally some of the Boost lock types are typedefs for unique_lock, since they are all built on unique locking. The distinct typedef names are there so that you can keep track of what you are using each one for; you could use the extra functionality anyway, but that would confuse your client code.
Edit: here is an example of a timed lock:
boost::timed_mutex timedMutexObj;
boost::timed_mutex::scoped_timed_lock scopedLockObj(timedMutexObj, boost::get_system_time() + boost::posix_time::seconds(60));
if(scopedLockObj.owns_lock()) {
// proceed
}
For reference: http://www.boost.org/doc/libs/1_49_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_concepts.timed_lockable.timed_lock
Edit again: to answer your question directly: yes, it would be wrong to use boost::mutex as a TimedLockable, because boost::timed_mutex is provided for that purpose. Even if they are the same thing in the source, that behavior is undocumented and therefore unreliable, and you should follow the documentation. (My code example did not use timed_mutex at first, but I have updated it.)
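The try-lock vs. timed-lock distinction described above is easy to demonstrate with the C++11 std::timed_mutex equivalents (a sketch, not Boost code; the function name is made up):

```cpp
#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>
#include <utility>

std::timed_mutex tm_demo;

// Attempts both kinds of lock while another thread holds the mutex.
// try_lock returns false immediately; try_lock_for blocks for up to
// the given duration, then gives up and also returns false.
std::pair<bool, bool> attempt_while_held() {
    bool immediate = tm_demo.try_lock();  // returns at once
    bool timed = tm_demo.try_lock_for(std::chrono::milliseconds(50));  // blocks, then gives up
    if (immediate) tm_demo.unlock();
    if (timed) tm_demo.unlock();
    return std::make_pair(immediate, timed);
}
```

Both attempts fail while the mutex is held, but the second one spends up to 50 ms blocked first, which is exactly the "behave as a try lock after the time is up" behavior described above.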
In C#, if I have, for example, a list, I can do
lock (this.mylist)
{
...
}
and with that code I'm sure no one else can use the list until the lock is released. This is useful in multithreaded applications. How can I do the same thing in Qt? I read the docs about QMutex and QReadWriteLock, but I don't understand how to use them on a specific object.
To use QMutex (or any standard synchronization method in C/C++), all critical sections which rely on each other must know about the mutex. The simplest way to ensure this (though not best practice in C++ - making it a class member or similar is better) is to create a global mutex variable, which we will do in this example.
So consider the following
QMutex mutex;
void someMethod()
{
mutex.lock();
// Critical section
mutex.unlock();
}
Now, lock and unlock are atomic methods, so only one thread will be able to enter the critical section at any given time. The key is that both are trying to access the same mutex.
So in essence, this works the same way as C# except you need to manage your mutex yourself. So the lock(...) { ... } block is replaced by mutex.lock() ... mutex.unlock(). This also implies, however, that anytime you want to access the critical section items (i.e. in your example, this->mylist), you should be using the mutex.
EDIT
Qt has very good documentation. You can read more about QMutex here: http://doc.qt.io/qt-4.8/qmutex.html
The general C++ way to do things like this is using RAII, so you wind up with code like this:
// Inside a function, a block that needs to be locked
{
QMutexLocker lock(&mutex); // locks mutex
// Do stuff
// "QMutexLocker" destructor unlocks the mutex when it goes out of scope
}
I don't know how that translates to Qt, but you could probably write a helper class if there's no native support.
EDIT: Thanks to Cory, you can see that Qt supports this idiom very nicely.
In C++11 you can do this:
#include <thread>
#include <mutex>
std::mutex mut;
void someMethod()
{
std::lock_guard<std::mutex> guard(mut); // named object, so the lock lives until end of scope
// Critical section
}
This is the RAII idiom - the lock guard is an object that will release the lock no matter how the scope of someMethod is exited (normal return or a thrown exception). See C++ Concurrency in Action by Anthony Williams. He also maintains a C++11 thread implementation, which is much like the Boost thread implementation.
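The "no matter how the scope is exited" claim is easy to demonstrate: even when the critical section throws, the guard's destructor still unlocks the mutex during stack unwinding. A small sketch (the function name is made up for illustration):

```cpp
#include <cassert>
#include <mutex>
#include <stdexcept>

std::mutex mut2;

void throws_inside_lock() {
    std::lock_guard<std::mutex> guard(mut2);
    // The exception propagates out of the function, but guard's
    // destructor runs during unwinding and unlocks mut2.
    throw std::runtime_error("boom");
}
```

After catching the exception, the mutex can be acquired again, which proves the guard released it; with a manual lock()/unlock() pair, the unlock would have been skipped.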
I want to implement an array-like data structure that allows multiple threads to modify/insert items simultaneously. How can I do this with good performance? I implemented a wrapper class around std::vector and used critical sections for synchronizing the threads. Please have a look at my code below. Each time a thread wants to work on the internal data, it may have to wait for other threads. Hence, I think its performance is NOT good. :( Any ideas?
class parallelArray{
private:
std::vector<int> data;
zLock dataLock; // my predefined class for synchronizing
public:
void insert(int val){
dataLock.lock();
data.push_back(val);
dataLock.unlock();
}
void modify(unsigned int index, int newVal){
dataLock.lock();
data[index]=newVal; // assuming that the index is valid
dataLock.unlock();
}
};
Take a look at shared_mutex in the Boost library. It allows you to have multiple readers, but only one writer:
http://www.boost.org/doc/libs/1_47_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_types.shared_mutex
The best way is to use a fast reader-writer lock. You perform shared locking for read-only access and exclusive locking for write access - this way, read-only accesses can proceed simultaneously.
In the user-mode Win32 API, Slim Reader/Writer (SRW) Locks are available on Vista and later.
Before Vista you have to implement the reader-writer lock functionality yourself, which is a fairly simple task: you can do it with one critical section, one event, and one enum/int value. A good implementation requires more effort, though - I would use a hand-crafted linked list of local (stack-allocated) structures to implement a fair waiting queue.
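On current compilers, the same reader-writer idea is available portably as std::shared_mutex (C++17). A minimal sketch of the wrapper shape described above - the class and method names are illustrative, not from any library:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <shared_mutex>
#include <vector>

class SharedArray {
    std::vector<int> data;
    mutable std::shared_mutex rw;  // many concurrent readers OR one writer
public:
    void push(int v) {
        std::unique_lock<std::shared_mutex> w(rw);  // exclusive: writer
        data.push_back(v);
    }
    int get(std::size_t i) const {
        std::shared_lock<std::shared_mutex> r(rw);  // shared: readers overlap
        return data[i];
    }
    std::size_t size() const {
        std::shared_lock<std::shared_mutex> r(rw);
        return data.size();
    }
};
```

Readers taking the shared lock do not block each other, so workloads dominated by reads see far less contention than with a single critical section like the zLock in the question.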
I've been looking for causes of deadlocks and for strategies/tools to avoid and detect them.
Another potential cause of deadlocks is having blocking functions call other blocking functions in a circular way, so that eventually a call never returns.
Sometimes this is hard to discover, especially in very large projects.
So, are there any tools/libraries/techniques that make it possible to automate the detection of circular calls in a program?
EDIT:
I code mostly in C and C++, so, if possible, give information about the topic that is applicable to those languages.
Nevertheless, it seems this topic is scarcely covered on SO, so answers for other languages are OK too, although maybe those deserve a topic of their own if someone finds it relevant.
Thanks.
Circular (or recursive) calls that try to acquire the same non-reentrant lock are one of the easiest to debug blocking scenarios: locking is deterministic, and can be easily checked. When the application locks, fire up the debugger and look at the stack trace to understand what locks are held and why.
As to general solutions for the problem of locking: you can look into libraries that provide mutex ordering and detect when you are trying to lock a mutex out of order. This type of solution can be complex to implement correctly, but once in place it ensures that you cannot enter a deadlock condition, as it forces all processes to obtain locks in the same order (i.e., if process A holds lock La and tries to acquire lock Lb, for which the ordering is correct, then it can either succeed or block; whichever process holds Lb cannot try to lock La, as that would violate the ordering constraint).
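The fixed-order idea can also be enforced mechanically: C++11's std::lock acquires multiple mutexes using a deadlock-avoidance algorithm, so two threads taking the same pair of locks in opposite textual order still cannot deadlock. A sketch using only standard facilities (not one of the ordering libraries mentioned above; the names are made up):

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex la, lb;
int shared_counter = 0;

void add_ab() {
    std::lock(la, lb);  // acquires both without risking deadlock
    std::lock_guard<std::mutex> ga(la, std::adopt_lock);
    std::lock_guard<std::mutex> gb(lb, std::adopt_lock);
    ++shared_counter;
}

void add_ba() {
    std::lock(lb, la);  // opposite textual order: still safe
    std::lock_guard<std::mutex> gb(lb, std::adopt_lock);
    std::lock_guard<std::mutex> ga(la, std::adopt_lock);
    ++shared_counter;
}
```

With plain la.lock(); lb.lock(); in one function and lb.lock(); la.lock(); in the other, this pair of functions would be a textbook deadlock; std::lock removes that hazard without requiring every call site to agree on an order.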
If you are on Linux, there are two Valgrind tools for detecting deadlocks and race conditions: Helgrind and DRD. They complement each other, and it is worth checking for threading errors with both of them.
On Linux you can use Valgrind to detect deadlocks: use --tool=helgrind.
The best way to detect deadlocks (IMO) is to make a test program that calls all the functions in random order, from around 30 different threads, tens of thousands of times.
If you get a deadlock you can use VS2010 "Parallel Stacks" window. Debug->Windows->Parallel Stacks
This window will show you all the stacks, so you can find the methods that are deadlocking.
A simple strategy I use to write thread-safe objects:
A thread safe object should be safe when its public methods are called, so you don't get deadlocks when it is used.
So, the idea is to just lock all the public methods that access the object's data.
Besides that, you need to ensure that within the class's code you never call a public method. If you need to use one of the public methods, make that method private, and wrap the private method with a public method that locks and then calls it.
If you want better lock granularity you could just create objects for each part that has its own lock, and lock it like I suggested. Then use encapsulation to combine those classes to the one class.
Example:
class Blah {
MyData data;
Lock lock;
public:
DataItem GetData(int index)
{
ReadLock read(lock);
return LocalGetData(index);
}
DataItem FindData(string key)
{
ReadLock read(lock);
DataItem item;
//find the item, can use LocalGetData() to get the item without deadlocking
return item;
}
void PutData(DataItem item)
{
WriteLock write(lock); //exclusive lock: PutData modifies the data
//put item in database
}
private:
DataItem LocalGetData(int index)
{
return data[index];
}
};
You could find a tool that builds a call graph, and check the graph for cycles.
Otherwise, there are a number of strategies for detecting deadlocks or other circularities, but they all depend on having some sort of supporting infrastructure in place.
There are deadlock avoidance strategies, having to do with assigning lock priorities and ordering the locks according to priority. These require code changes and enforcing the standards, though.
I'm looking for something similar to the CopyOnWriteSet in Java, a set that supports add, remove and some type of iterators from multiple threads.
There isn't one that I know of; the closest is in Intel Threading Building Blocks, which has concurrent_unordered_map.
The STL containers allow concurrent read access from multiple threads as long as you aren't doing concurrent modification. Often it isn't necessary to iterate while adding/removing.
The guidance about providing a simple wrapper class is sane. I would start with something like the code snippet below, protecting the methods that you really need concurrent access to, and then providing 'unsafe' access to the underlying std::set so folks can opt into the other, unprotected methods. If necessary you can protect acquiring and returning iterators as well, but this is tricky (still less so than writing your own lock-free set or your own fully synchronized set).
I work on the Parallel Patterns Library, so I'm using critical_section from the VS2010 beta, but boost::mutex works great too, and the RAII pattern of using a lock_guard is almost necessary regardless of how you choose to do this:
template <class T>
class synchronized_set
{
//boost::mutex is good here too
critical_section cs;
public:
typedef set<T> std_set_type;
set<T> unsafe_set;
bool try_insert(const T& value)
{
//boost has a lock_guard too
lock_guard<critical_section> guard(cs);
return unsafe_set.insert(value).second; //false if already present
}
};
Why not just use a shared mutex to protect concurrent access? Be sure to use RAII to lock and unlock the mutex:
{
Mutex::Lock lock(mutex);
// std::set manipulation goes here
}
where Mutex::Lock is a class that locks the mutex in the constructor and unlocks it in the destructor, and mutex is a mutex object that is shared by all threads. Mutex is just a wrapper class that hides whatever specific OS primitive you are using.
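One possible shape for such a wrapper, using std::mutex as the hidden primitive (a sketch under that assumption; a real wrapper would hide whatever your platform provides, such as a CRITICAL_SECTION on Win32):

```cpp
#include <cassert>
#include <mutex>
#include <set>

class Mutex {
    std::mutex impl;  // the hidden platform primitive
public:
    class Lock {
        Mutex& m;
    public:
        explicit Lock(Mutex& mx) : m(mx) { m.impl.lock(); }  // lock in constructor
        ~Lock() { m.impl.unlock(); }                         // unlock in destructor
        Lock(const Lock&) = delete;                          // guards are not copyable
        Lock& operator=(const Lock&) = delete;
    };
};
```

Because Lock is non-copyable and unlocks in its destructor, the mutex is released on every exit path from the block, including exceptions - the same RAII guarantee the other answers rely on.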
I've always thought that concurrency and set behavior are orthogonal concepts, so it's better to have them in separate classes. In my experience, classes that try to be thread-safe themselves aren't very flexible or all that useful.
You don't want internal locking, as your invariants will often require multiple operations on the data structure, and internal locking only prevents the steps happening at the same time, whereas you need to keep the steps from different macro-operations from interleaving.
You can also take a look at the ACE library, which has all the thread-safe containers you might ever need.
All I can think of is to use OpenMP for parallelization, derive a set class from std's and put a shell around each critial set operation that declares that operation critical using #pragma omp critical.
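That idea might look like the following sketch. Deriving from std::set is what the answer suggests, though note std::set has no virtual destructor, so don't delete through the base pointer; the class name and pragma label are made up, and the #pragma is simply ignored (the code runs serially) if OpenMP is not enabled:

```cpp
#include <cassert>
#include <set>

// A std::set<int> derivative whose mutating wrapper is guarded by a
// named OpenMP critical section, so parallel callers are serialized.
class critical_set : public std::set<int> {
public:
    void safe_insert(int v) {
        #pragma omp critical(critical_set_ops)
        insert(v);
    }
};
```

Every mutating operation you expose would need a wrapper entering the same named critical section; any method called without one remains unsynchronized.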
Qt's QSet class uses implicit sharing (copy-on-write semantics) and has methods similar to std::set's; you can look at its implementation, since Qt is LGPL.
Thread safety and copy-on-write semantics are not the same thing. That being said...
If you're really after copy-on-write semantics the Adobe Source Libraries has a copy_on_write template that adds these semantics to whatever you instantiate it with.