I'm using boost::interprocess::message_queue for communication between threads in my application. I'm doing so for two reasons: first, I don't need to implement a shared-memory synchronization mechanism myself, and second, I want to model the system this way because it may change to interprocess communication in the future.
My question is: given these constraints, is there a more appropriate mechanism for inter-thread communication, or can I keep using the interprocess queue without worrying about 'interprocess overhead'?
You could use a std::queue protected by a boost::mutex and a boost::condition_variable.
Anthony Williams provides an excellent explanation of how to implement a thread-safe queue in his book C++ Concurrency in Action.
Example code is available on his website here:
Just Software Solutions - Implementing a Thread Safe Queue
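For orientation, here is a minimal sketch along those lines (not Williams' exact code): a std::queue protected by a boost::mutex, with a boost::condition_variable so a consumer can block until data arrives.

```cpp
#include <boost/thread.hpp>
#include <queue>
#include <utility>

// Minimal sketch: std::queue + boost::mutex + boost::condition_variable.
template <typename T>
class threadsafe_queue {
public:
    void push(T value) {
        {
            boost::lock_guard<boost::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one();   // wake one waiting consumer
    }

    void wait_and_pop(T& value) {
        boost::unique_lock<boost::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });  // sleep until data
        value = std::move(queue_.front());
        queue_.pop();
    }

private:
    std::queue<T> queue_;
    boost::mutex mutex_;
    boost::condition_variable cond_;
};
```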
This question refers to boost::threadpool::pool, and there are docs about it on SourceForge, but I can't find it in the Boost docs.
Why is it called boost if it's not in Boost?
PS: I know how to use Boost::ASIO's io_service to create a thread pool, but I'd like to understand what this boost::threadpool is.
As a former helper with Boost.Thread maintenance, I was often asked why Boost.Thread doesn't provide a thread pool. The simple answer is that it really is too easy to roll your own; for example, here is a perfectly fine thread pool implementation in only a few lines of C++.
It's too small a thing for Boost, and too much bike shedding would happen in trying to submit a general-purpose thread pool. So you can misuse ASIO to implement a thread pool (also easy), roll your own, or just use the thread pool in the C++11 standard library accessible via std::async.
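For illustration, here is a rough sketch of the "misuse ASIO as a thread pool" idea, using the older io_service/work spelling mentioned in the question (newer Boost renames these to io_context and executor_work_guard). The thread and task counts are arbitrary.

```cpp
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <memory>

int main() {
    boost::asio::io_service io;
    std::unique_ptr<boost::asio::io_service::work> keep_alive(
        new boost::asio::io_service::work(io));       // keeps run() from returning early

    boost::thread_group workers;
    for (int i = 0; i < 4; ++i)
        workers.create_thread([&io] { io.run(); });   // each thread serves the pool

    for (int i = 0; i < 8; ++i)
        io.post([i] { std::cout << "task " << i << " done\n"; });  // submit work

    keep_alive.reset();   // no more work coming; run() returns once the queue drains
    workers.join_all();
}
```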
How can I make a queue thread-safe? I need push/pop/front/back and clear. Is there something similar in Boost?
I have one producer and one or more consumers.
std::queue is not thread-safe if one or more threads are writing. Its interface is also not conducive to a thread-safe implementation, because it has separate methods such as pop(), size(), and empty() which would have to be synchronized externally.
A common approach* is to implement a queue type with a simpler interface, and use locking mechanisms internally to provide synchronization.
* A search for "concurrent queue C++" should yield many results. I implemented a very simple toy one here, where the limitation was to use only standard C++. See also Anthony Williams' book C++ Concurrency in Action, as well as his blog.
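To illustrate why the separate calls are a problem, here is a minimal sketch with a combined check-and-pop operation under one lock. The names try_pop and concurrent_queue are my own choices for this sketch, not from any particular library.

```cpp
#include <boost/thread.hpp>
#include <queue>
#include <utility>

// Even if every individual member call were internally locked, this consumer
// pattern would still be racy: another thread can pop between empty() and front().
//
//   if (!q.empty()) {        // queue non-empty here...
//       item = q.front();    // ...but possibly empty again by now
//       q.pop();
//   }
//
// A combined operation under one lock avoids the gap:
template <typename T>
class concurrent_queue {
public:
    void push(T value) {
        boost::lock_guard<boost::mutex> lock(mutex_);
        queue_.push(std::move(value));
    }
    bool try_pop(T& out) {               // check-and-pop as one atomic step
        boost::lock_guard<boost::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }
private:
    std::queue<T> queue_;
    boost::mutex mutex_;
};
```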
You must protect access to the std::queue. If you are using Boost, protect it with a boost::mutex. If you have multiple reader threads and one writer thread, look at boost::shared_mutex together with boost::shared_lock (for the readers) and boost::unique_lock (for the writer); keep in mind that frequent readers holding shared locks can lead to writer starvation.
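A minimal sketch of that reader/writer idea with boost::shared_mutex, for reference. Note that for a queue only genuinely read-only calls such as size() can take a shared lock; pop() also modifies the queue and would need the exclusive lock. The class name is made up for this sketch.

```cpp
#include <boost/thread.hpp>
#include <queue>
#include <utility>

template <typename T>
class rw_queue {
public:
    void push(T value) {
        boost::unique_lock<boost::shared_mutex> lock(mutex_);  // exclusive: writer
        queue_.push(std::move(value));
    }
    std::size_t size() const {
        boost::shared_lock<boost::shared_mutex> lock(mutex_);  // shared: many readers
        return queue_.size();
    }
private:
    std::queue<T> queue_;
    mutable boost::shared_mutex mutex_;
};
```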
In Boost 1.53 there is a lock-free queue (http://www.boost.org/doc/libs/1_53_0/doc/html/boost/lockfree/queue.html), with no mutex or anything like that.
You have to protect it, e.g. with a std::mutex, on every operation. Boost would be an alternative if you don't have C++11 yet.
I have a game with two threads. One generates instances of a custom class and needs to store them (I tried pushing them into a queue, but I am not sure that is thread-safe); the first thread generates a new instance every 50 ms, and the second can read faster or slower than that, with the speed changing over time. The second thread checks whether the queue is not empty, pops the first element, and does some calculations with it. Is there a thread-safe data structure for this problem in the STL or Boost?
Using std::queue or any similar container will not be thread-safe. If you want your accesses (push/pop) to be thread-safe while using std::queue, you should use boost::mutex or a similar mechanism to lock before each access. You can look at boost::shared_mutex if you need immutable reads from more than one thread (not sure you need that based on what you described).
Apart from that, you can take a look at boost::interprocess::message_queue, as someone has already mentioned: http://www.boost.org/doc/libs/1_50_0/boost/interprocess/ipc/message_queue.hpp (the link is to the most recent version of Boost).
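For completeness, a hedged sketch of basic message_queue usage. The queue name, capacity, and int payload are placeholders I made up; the queue copies raw bytes, so it suits trivially copyable data rather than arbitrary class objects.

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

namespace bip = boost::interprocess;

int main() {
    bip::message_queue::remove("game_queue");                  // clean up any old queue
    bip::message_queue mq(bip::create_only, "game_queue",
                          100,             // max number of messages
                          sizeof(int));    // max message size in bytes

    int produced = 42;
    mq.send(&produced, sizeof(produced), 0 /*priority*/);

    int consumed = 0;
    bip::message_queue::size_type recvd_size = 0;
    unsigned int priority = 0;
    mq.receive(&consumed, sizeof(consumed), recvd_size, priority);  // blocks if empty
    std::cout << consumed << "\n";

    bip::message_queue::remove("game_queue");
}
```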
Moreover, there is the concept of lock-free queues (en.wikipedia.org/wiki/Non-blocking_algorithm). I cannot provide an example of such an implementation, but I am sure you can find some if you search around.
I am a newbie in C++ and Boost.
As part of my master's thesis, I wrote a program that simulates a statistical model. During the computation, I use boost::thread to process my "center of mass vector" in order to save some computation time. So far so good.
Now, I would like to take each result from the boost::thread (one element at a time) and pass it to a running thread, which is going to perform a recursive regression.
My questions:
How can I pass my newly computed element to the existing thread?
How can I "wake up" the thread when I pass it the new element?
I would be happy if someone could point me to an existing example.
The simplest possible way is to use std::queue, boost::mutex, and boost::condition_variable. Wrap every access to the queue with the mutex; after pushing to the queue, call condition_variable.notify_one(). In the consumer thread, wait on the condition_variable until a result is ready, then process it.
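A minimal wiring sketch of that pattern follows. The double result type, loop counts, and printed output are placeholders, not from the original question.

```cpp
#include <boost/thread.hpp>
#include <queue>
#include <iostream>

std::queue<double> results;
boost::mutex mtx;
boost::condition_variable cond;

void producer() {
    for (int i = 0; i < 10; ++i) {
        double value = i * 0.5;                 // stand-in for the real computation
        {
            boost::lock_guard<boost::mutex> lock(mtx);
            results.push(value);
        }
        cond.notify_one();                      // wake the regression thread
    }
}

void consumer() {
    for (int i = 0; i < 10; ++i) {
        boost::unique_lock<boost::mutex> lock(mtx);
        cond.wait(lock, [] { return !results.empty(); });   // sleeps until notified
        double value = results.front();
        results.pop();
        lock.unlock();
        std::cout << "processing " << value << "\n";         // regression step goes here
    }
}

int main() {
    boost::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```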
A proven way to control one thread from another is to send messages via a combination of a queue and a condition variable. Unfortunately, boost::thread doesn't provide a ready-made solution, and there are a couple of tricky points when implementing one (possible deadlocks, behaviour when the queue is full, the use of polymorphic messages, ...).
You should use a mutex and/or a semaphore to synchronize your threads, and lock shared variables to achieve thread-safe communication. Just note that all threads in your process share the same memory, so they can access the same data, but you have to do it in a thread-safe way.
I'm not sure whether the Boost library implements any threading primitives, but here is a good tutorial on multi-threaded programming using POSIX threads: http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
I need a fast inter-thread communication mechanism for passing work (void*) from TBB tasks to several workers which are in running/blocking operations.
Currently I'm looking into using pipe()+libevent. Is there a faster and more elegant alternative for use with Intel Threading Building Blocks?
You should be able to just use standard memory with mutex locks since threads share the same memory space. The pipe()+libevent solution seems more fitting for interprocess communication where each process has a different memory space.
Check out Implementing a Thread-Safe Queue using Condition Variables. It uses an STL queue, a mutex, and a condition variable to facilitate inter-thread communication. (I don't know if this is applicable to Intel Threading Building Blocks, but since TBB is not mentioned in the question/title, I assume others will end up here like I did -- looking for an inter-thread communication mechanism that is not IPC. And this article might help them, like it helped me.)
Take a look at the Boost lock-free, thread-safe queue. It is very easy to use and works really well. I've used it with threads running on separate cores polling the queue for work.
http://www.boost.org/doc/libs/1_55_0/doc/html/lockfree.html
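A small usage sketch, assuming an int payload, an arbitrary capacity of 128, and 100 items; boost::lockfree::queue requires element types with a trivial assignment operator and destructor.

```cpp
#include <boost/lockfree/queue.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::lockfree::queue<int> work(128);   // fixed initial capacity

    boost::thread producer([&work] {
        for (int i = 0; i < 100; ++i)
            while (!work.push(i))            // push fails only if the queue is full
                ;                            // spin (or back off) and retry
    });

    boost::thread consumer([&work] {
        int item = 0, received = 0;
        while (received < 100) {
            if (work.pop(item))              // non-blocking: false when queue is empty
                ++received;                  // a real worker would back off when idle
        }
        std::cout << "received " << received << " items\n";
    });

    producer.join();
    consumer.join();
}
```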