I need a fast inter-thread communication mechanism for passing work items (void*) from TBB tasks to several worker threads that are busy with long-running or blocking operations.
Currently I'm looking into using pipe()+libevent. Is there a faster and more elegant alternative for use with Intel Threading Building Blocks?
You should be able to just use standard memory with mutex locks since threads share the same memory space. The pipe()+libevent solution seems more fitting for interprocess communication where each process has a different memory space.
Check out Implementing a Thread-Safe Queue using Condition Variables. It uses an STL queue, a mutex, and a condition variable to facilitate inter-thread communication. (I don't know if this is applicable to Intel Threading Building Blocks, but since TBB is not mentioned in the question/title, I assume others will end up here like I did -- looking for an inter-thread communication mechanism that is not IPC. And this article might help them, like it helped me.)
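For reference, here is a condensed sketch of the queue that article describes, written with the C++11 standard library rather than the Boost types the article itself uses:

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class ThreadSafeQueue {
    std::queue<T> queue_;
    mutable std::mutex mutex_;
    std::condition_variable cond_;
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one(); // wake one waiting consumer
    }
    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        // wait() releases the mutex while blocked and re-checks the predicate on wakeup
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
};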
Take a look at the Boost lock free and thread safe queue. Very easy to use and works really well. I've used it with threads running on separate cores polling the queue for work.
http://www.boost.org/doc/libs/1_55_0/doc/html/lockfree.html
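A minimal sketch of how it is typically used; the element type, capacity, and function names here are illustrative:

#include <boost/lockfree/queue.hpp>

// Fixed-capacity, lock-free, multi-producer/multi-consumer queue.
// Element types must be trivially copyable and destructible; void* qualifies.
boost::lockfree::queue<void*> work_queue(1024);

void submit(void* work_item) {
    while (!work_queue.push(work_item)) {
        // queue full: retry (or back off)
    }
}

void* poll_for_work() {
    void* work_item = nullptr;
    while (!work_queue.pop(work_item)) {
        // queue empty: keep polling, as described above
    }
    return work_item;
}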
I am writing a network service which receives raw packets, converts them, and puts them into a queue. A couple of worker threads take the converted packets from the queue and, based on some rules, update a hash-map. In order to prevent concurrent updates of the hash-map from different worker threads I have to use a mutex. Unfortunately, the mutex imposes a big performance hit, and I need to find a way around this.
EDITED:
The converted packets contain a session_id, and this session_id is used as the hash-map key. Before any insertion or update, the session_id is first looked up; if it is not found, a new entry is added, and this is exactly where I use the mutex lock. Otherwise, if the session_id already exists, I just update the existing value, and no mutex is used for the mere value update. It might help to know that I use boost::unordered_map as the underlying hash-map.
Below is pseudocode for the logic I use:
if hash.find(session_id) then
    hash.update(value)
else
    mutex.lock()
    hash.insert(value)
    mutex.unlock()
end
What is your suggestion?
by the way this is my working environment and tools:
Compiler: gcc (C++)
Thread library: pthread
OS: Ubuntu 14.04
The fastest solution would be to split the data so that each thread uses its own data set and no locking is needed at all. You may be able to get there by distributing the messages among the threads based on some key data, as sketched below.
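A minimal sketch of that idea, assuming an integer session_id is available as the key and that each worker thread owns a private map (all names and the value type here are illustrative):

#include <boost/unordered_map.hpp>
#include <cstdint>
#include <vector>

// Each worker owns its own map, so no locking is needed at all.
struct Worker {
    boost::unordered_map<std::uint64_t, int> sessions; // value type is illustrative
};

std::vector<Worker> workers(4); // one per worker thread

// Route every packet of a given session to the same worker, so each
// map entry is only ever touched by one thread.
std::size_t worker_for(std::uint64_t session_id) {
    return session_id % workers.size();
}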
The second-best solution would be a read-write spinlock implemented using either C++11 atomics or the compiler's atomic builtins; see https://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html
A read-write spinlock typically allows multiple parallel read accesses, but only one write access (which of course also blocks all read accesses).
There is also a read-write mutex in Linux (pthread_rwlock), but I found it to be slightly slower than a hand-made implementation.
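A minimal hand-made read-write spinlock along those lines, using C++11 atomics (a sketch only: it is not fair, and writers can starve under a heavy read load):

#include <atomic>

class RWSpinLock {
    // state == 0: free; state > 0: number of readers; state == -1: one writer
    std::atomic<int> state{0};
public:
    void lock_read() {
        for (;;) {
            int s = state.load(std::memory_order_relaxed);
            if (s >= 0 && state.compare_exchange_weak(s, s + 1, std::memory_order_acquire))
                return; // reader admitted
        }
    }
    void unlock_read() { state.fetch_sub(1, std::memory_order_release); }
    void lock_write() {
        for (;;) {
            int expected = 0; // lockable only with no readers and no writer
            if (state.compare_exchange_weak(expected, -1, std::memory_order_acquire))
                return;
        }
    }
    void unlock_write() { state.store(0, std::memory_order_release); }
};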
Have you looked into lock-free data structures? An interesting paper on the subject is Lock-Free Data Structures with Hazard Pointers by Andrei Alexandrescu and Maged Michael. Implementations using similar ideas can be found, for instance, in the libcds repository on GitHub.
Although they use locking to some extent, Facebook's folly AtomicHashMap and Intel's TBB also provide high-performance concurrent hash-maps.
Of course these approaches will require a bit of extra reading and integration work, but if you have determined that your current locking strategy is the bottleneck, it may well be worth the cost.
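As an illustration of the last option, the insert-or-update from the question collapses to a few lines with tbb::concurrent_hash_map, which locks only the affected entry rather than the whole map (the value type here is illustrative):

#include <tbb/concurrent_hash_map.h>
#include <cstdint>

typedef tbb::concurrent_hash_map<std::uint64_t, int> SessionMap;
SessionMap sessions;

void insert_or_update(std::uint64_t session_id, int value) {
    SessionMap::accessor acc;         // holds a write lock on the entry while in scope
    sessions.insert(acc, session_id); // inserts a default-constructed value if absent
    acc->second = value;              // update, whether the entry was new or existing
}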
Could you help me understand how to use mutexes in a multithreaded Linux application, where:
during data writing, the variable must be locked against both writing and reading;
during data reading from the variable, it must be locked against writing only.
So it is possible to read simultaneously, but writing is a single operation at a time; while a write is in progress, all other operations should wait for it to finish.
You're asking about something that is a bit higher level than mutexes. A mutex is a simple, low-level device: only one thread at a time can hold it, and any other thread that tries to acquire the same mutex is blocked until the owner releases it. A plain mutex makes no distinction between readers and writers, so it serializes all of them.
You are asking about a read-write lock. Read-write locks use mutexes underneath the hood. The POSIX functions that deal with read-write locks start with pthread_rwlock_. Since you are on a Linux machine, just type man pthread and look for the section marked "READ/WRITE LOCK ROUTINES".
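A minimal example of those routines (error checking omitted for brevity; the shared variable is illustrative):

#include <pthread.h>

pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
int shared_value; // illustrative shared data

int read_value(void) {
    pthread_rwlock_rdlock(&lock); // many readers may hold this at once
    int v = shared_value;
    pthread_rwlock_unlock(&lock);
    return v;
}

void write_value(int v) {
    pthread_rwlock_wrlock(&lock); // exclusive: blocks readers and writers
    shared_value = v;
    pthread_rwlock_unlock(&lock);
}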
You need a reader/writer lock to allow multiple readers/single writer.
Boost.Thread has one of these (boost::shared_mutex), if you have no other preferred threading library. This uses PThreads primitives under the covers, and will probably save you time in wrapping the raw APIs yourself.
I would not recommend implementing this yourself: it's easy to get something that appears to work, but that under load either crashes, kills performance, or (worst of all) silently corrupts your data, so you get bad results.
A simple boost::mutex can also be used here, as noted by @Als, but it won't allow multiple concurrent reads. That is simpler to implement and may be sufficient for your needs, depending on your read/write access profile.
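A short sketch of boost::shared_mutex in use; shared_lock is the read side and unique_lock the write side (the shared variable is illustrative):

#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>

boost::shared_mutex rw_mutex;
int shared_data; // illustrative

int reader() {
    boost::shared_lock<boost::shared_mutex> lock(rw_mutex); // many readers at once
    return shared_data;
}

void writer(int value) {
    boost::unique_lock<boost::shared_mutex> lock(rw_mutex); // exclusive access
    shared_data = value;
}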
You will need to use mutexes if you have global or static objects which are accessed (read and written) from different threads.
The disadvantage would be in comparison to a technique specialized for threads running within the same process. For example, does wait/post cause the whole process to yield rather than just the calling thread, even though anyone waiting for a post would be within the same process?
The semaphore would be used, for example, to solve a producer/consumer problem in a shared buffer between two threads in the same process.
Are there any reasonable alternatives?
Use Boost.Thread condition variables as shown here. The accompanying article has a good summary of Boost.Thread features.
Using interprocess semaphores will work but it's likely to place a tax on your execution due to use of unnecessarily heavyweight underlying OS locking primitives (named kernel objects in Windows, for example).
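A minimal sketch of that condition-variable approach for the producer/consumer signalling described above, using Boost.Thread (the flag is illustrative; a real buffer would go where the comments indicate):

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>

boost::mutex mtx;
boost::condition_variable cv;
bool data_ready = false; // illustrative shared state

void producer() {
    {
        boost::lock_guard<boost::mutex> lock(mtx);
        // ... put an item into the shared buffer here ...
        data_ready = true;
    }
    cv.notify_one(); // wakes one waiting thread, not the whole process
}

void consumer() {
    boost::unique_lock<boost::mutex> lock(mtx);
    while (!data_ready)
        cv.wait(lock); // atomically releases mtx and blocks only this thread
    // ... take the item out of the shared buffer here ...
    data_ready = false;
}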
Windows provides a number of objects useful for synchronising threads, such as events (with SetEvent and WaitForSingleObject), mutexes, and critical sections.
Personally I have always used them, especially critical sections, since I'm pretty certain they incur very little overhead unless already locked. However, looking at a number of libraries, such as Boost, people tend to go to a lot of trouble to implement their own locks using the interlocked methods on Windows.
I can understand why people would write lock-free queues and such, since that's a specialised case, but is there any reason why people choose to implement their own versions of the basic synchronisation objects?
Libraries aren't implementing their own locks. That is pretty much impossible to do without OS support.
What they are doing is simply wrapping the OS-provided locking mechanisms.
Boost does it for a couple of reasons:
They're able to provide a much better designed locking API, taking advantage of C++ features. The Windows API is C only, and not very well-designed C, at that.
They are able to offer a degree of portability: the same Boost API can be used whether you run your application on a Linux machine or on a Mac, while Windows' own API is obviously Windows-specific.
The Windows-provided mechanisms have a glaring disadvantage: They require you to include windows.h, which you may want to avoid for a large number of reasons, not least its extreme macro abuse polluting the global namespace.
One particular reason I can think of is portability. Windows locks are just fine on their own, but they are not portable to other platforms. A library which wishes to be portable must implement its own locking layer to guarantee the same semantics across platforms.
In many libraries (e.g., Boost) you need to write cross-platform code, so using WaitForSingleObject and SetEvent is a no-go. Also, there are common idioms, like monitors and condition variables, that the Win32 API misses (though they can be implemented using those basic primitives).
Some lock-free data structures, like atomic counters, are very useful; for example, boost::shared_ptr uses them to make itself thread-safe without the overhead of a critical section, and most compilers (not MSVC) use atomic counters to implement a thread-safe copy-on-write std::string.
Some things, like queues, can be implemented very efficiently in a thread-safe way without locks at all, which may give a significant performance boost in certain applications.
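For illustration, the kind of atomic counter a reference-counted smart pointer relies on is only a few lines with C++11 atomics (a simplified sketch):

#include <atomic>

struct RefCounted {
    std::atomic<int> refs{1}; // starts owned by its creator

    void add_ref() { refs.fetch_add(1, std::memory_order_relaxed); }

    void release() {
        // The last release observes the count dropping from 1 to 0 and deletes.
        if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }

    virtual ~RefCounted() {}
};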
There may occasionally be good reasons for implementing your own locks that don't use the Windows OS synchronization objects. But doing so is a "sharp stick." It's easy to poke yourself in the foot.
Here's an example: If you know that you are running the same number of threads as there are hardware contexts, and if the latency of waking up one of those threads which is waiting for a lock is very important to you, you might choose a spin lock implemented completely in user space. If the waiting thread is the only thread spinning on the lock, the latency of transferring the lock from the thread that owns it to the waiting thread is just the latency of moving the cache line to the owner thread and back to the waiting thread -- orders of magnitude faster than the latency of signaling a thread with an OS lock under the same circumstances.
But the scenarios where you want to do this are pretty narrow. As soon as you have more software threads than hardware threads, you'll likely regret it: you could spend entire OS scheduling quanta doing nothing but spinning on your spin lock. And if you care about power, spinlocks are bad because they prevent the processor from going into a low-power state.
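For reference, a user-space spin lock of the kind described above is only a few lines with C++11 atomics (a sketch; as the answer says, it should be used only in narrow circumstances):

#include <atomic>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h> // for _mm_pause
#endif

class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag.test_and_set(std::memory_order_acquire)) {
#if defined(__x86_64__) || defined(_M_X64)
            _mm_pause(); // tell the CPU we are busy-waiting
#endif
        }
    }
    void unlock() { flag.clear(std::memory_order_release); }
};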
I'm not sure I buy the portability argument. Portable libraries often have an OS portability layer that abstracts the different OS APIs for synchronization. If you're dealing with locks, a pthread_mutex can be made semantically the same as a Windows Mutex or Critical Section under an abstraction layer. There are some exceptions here, but for most people this is true. If you're dealing with Windows Events or POSIX condition variables, well, those are tougher to abstract. (Vista did introduce POSIX-style condition variables, but not many Windows software developers are in a position to require Vista...)
Writing locking code for a library is useful if that library is meant to be cross-platform. Users of the library can use the library's locking functionality and not have to care about the underlying platform implementation. Assuming the library has versions for all the platforms being targeted, it's one less bit of code that has to be ported.
What is the common theory behind thread communication? I have some primitive idea about how it should work but something doesn't settle well with me. Is there a way of doing it with interrupts?
Really, it's just the same as any concurrency problem: you've got multiple threads of control, and it's indeterminate which statements on which threads get executed when. That means there are a large number of POTENTIAL execution paths through the program, and your program must be correct under all of them.
In general, the place where trouble can occur is when state is shared among the threads (aka "lightweight processes" in the old days), that is, when there are shared memory areas.
To ensure correctness, what you need to do is ensure that these data areas get updated in a way that can't cause errors. To do this, you need to identify "critical sections" of the program, where sequential operation must be guaranteed. Those can be as little as a single instruction or line of code; if the language and architecture ensure that these are atomic, that is, can't be interrupted, then you're golden.
Otherwise, you identify that section and put some kind of guard onto it. The classic way is to use a semaphore, an atomic mechanism that only allows one thread of control past at a time. Semaphores were invented by Edsger Dijkstra, and so their operations have names that come from Dutch, P and V. When you come to a P, only one thread can proceed; all other threads are queued and wait until the executing thread comes to the associated V operation.
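In POSIX terms, P is sem_wait and V is sem_post; guarding a critical section looks like this:

#include <semaphore.h>

sem_t guard;

void init_guard(void) {
    sem_init(&guard, 0, 1); // binary semaphore: at most one thread inside
}

void critical_work(void) {
    sem_wait(&guard); // P: proceed only when the count is positive, else queue
    // ... critical section: exactly one thread executes this at a time ...
    sem_post(&guard); // V: release, letting one queued thread proceed
}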
Because these primitives are a little, well, primitive, and because the Dutch names aren't very intuitive, some other larger-scale approaches have been developed.
Per Brinch Hansen invented the monitor, which is basically a data structure whose operations are guaranteed to be atomic; monitors can be implemented with semaphores. Monitors are pretty much what Java's synchronized statements are based on; they give an object or code block that particular behavior -- that is, only one thread can be "in" it at a time -- with simpler syntax.
There are other models possible. Haskell and Erlang solve the problem by being functional languages that never allow a variable to be modified once it's created; this means they naturally don't need to worry about synchronization. Some newer languages, like Clojure, instead have a structure called "transactional memory", which basically means that when there is an assignment, you're guaranteed the assignment is atomic and reversible.
So that's it in a nutshell. To really learn about it, the best places to look are operating systems texts, like, e.g., Andy Tanenbaum's.
The two most common mechanisms for thread communication are shared state and message passing.
The most common way for threads to communicate is via some shared data structure, typically a queue. Some threads put information into the queue while others take it out. The queue must be protected by operating system facilities such as mutexes and semaphores. Interrupts have nothing to do with it.
If you're really interested in a theory of thread communications, you may want to look into formalisms like the pi Calculus.
To communicate between threads, you'll need to use whatever mechanism is supplied by your operating system and/or runtime. Interrupts would be unusually low level, although they might be used implicitly if your threads communicate using sockets or named pipes.
A common pattern would be to implement shared state using a shared memory block, relying on an OS-supplied synchronization primitive such as a mutex to spare you from busy-waiting when you read from the block. Remember that if you have threads at all, then you must already have some kind of scheduler (whether native to the OS or emulated in your language runtime), so this scheduler can provide synchronization objects and a "sleep" function without necessarily having to rely on hardware support.
Sockets, pipes, and shared memory work between processes too. Sometimes a runtime will give you a lighter-weight way of doing synchronization for threads within the same process. Shared memory is cheaper within a single process. And sometimes your runtime will also give you an atomic message-passing mechanism.