How can I make a queue thread safe? I need push / pop / front / back and clear. Is there something similar in Boost?
I have one producer and one or more consumers.
std::queue is not thread safe if one or more threads are writing. And its interface is not conducive to a thread safe implementation, because it has separate methods such as pop(), size() and empty() which would have to be synchronized externally.
A common approach* is to implement a queue type with a simpler interface, and use locking mechanisms internally to provide synchronization.
* A search for "concurrent queue C++" should yield many results. I implemented a very simple toy one here, where the limitation was to use only standard C++. See also Anthony Williams' book 'C++ Concurrency in Action', as well as his blog.
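For illustration, here is a minimal sketch of that idea in standard C++ (this is not the toy implementation linked above; the class and member names are made up). The key point is that the racy empty()/front()/pop() sequence is collapsed into a single try_pop():

```cpp
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class concurrent_queue
{
public:
    void push(T value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(value));
    }

    // Combines empty(), front() and pop() into one atomic operation,
    // so callers never observe the queue between two separate calls.
    bool try_pop(T& value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        value = std::move(queue_.front());
        queue_.pop();
        return true;
    }

    void clear()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        std::queue<T>().swap(queue_);
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
};
```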
You must protect access to the std::queue. If you are using Boost, protect it with a boost::mutex. If you have multiple reader threads and one writer thread, look at boost::shared_lock (for the readers) and boost::unique_lock (for the writer).
However, if you run into writer-thread starvation, look at boost::shared_mutex.
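A rough sketch of that reader/writer pattern, assuming Boost.Thread (the queue and names below are illustrative, not from the original answer): read-only operations take a shared lock, while mutating operations take an exclusive lock on the same boost::shared_mutex.

```cpp
#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <queue>

std::queue<int> shared_queue;
boost::shared_mutex queue_mutex;

// Readers: several threads may hold a shared lock at the same time.
std::size_t reader_size()
{
    boost::shared_lock<boost::shared_mutex> lock(queue_mutex);
    return shared_queue.size();
}

// Writer: exclusive access while the queue is modified.
void writer_push(int value)
{
    boost::unique_lock<boost::shared_mutex> lock(queue_mutex);
    shared_queue.push(value);
}
```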
In Boost 1.53 there is a lock-free queue: http://www.boost.org/doc/libs/1_53_0/doc/html/boost/lockfree/queue.html. No mutex or anything like that is needed.
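A minimal usage sketch (the capacity and element type are arbitrary choices here); note that boost::lockfree::queue is limited to simple element types such as ints or pointers, and both push() and pop() report success via their return value:

```cpp
#include <boost/lockfree/queue.hpp>
#include <iostream>

int main()
{
    // Pre-allocate room for 128 elements; the queue can grow if needed.
    boost::lockfree::queue<int> queue(128);

    queue.push(42);            // returns false only if allocation fails

    int value;
    if (queue.pop(value))      // returns false if the queue was empty
        std::cout << value << '\n';
}
```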
You have to protect it, e.g. with a std::mutex, on every operation. Boost would be an alternative if you don't have C++11 yet.
Related
I ran into a situation where I need to lock a resource (a std::queue) shared between two processing threads.
The first thread needs to push data to std::queue, while the second thread is going to pop that data out of the queue and process it.
I need to make sure both threads will not compete for my std::queue.
As this is my first time using C++ locks, I came across different approaches: std::lock and std::unique_lock, but I don't know which one to choose...
What is the difference between std::lock and std::unique_lock, and how should they be used?
Thanks for helping.
std::lock is an algorithm that locks a collection of lockable objects all at once in a specific way that avoids deadlocks.
std::unique_lock is a class template that wraps a mutex and can be used as a scoped lock guard, similar to std::lock_guard, but more powerful than the latter (it is itself lockable, can be unlocked early and can be moved around).
You probably want neither of those, but instead just use the good old std::lock_guard.
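A short sketch contrasting std::lock_guard and std::unique_lock, applied to the queue from the question (function and variable names are made up):

```cpp
#include <mutex>
#include <queue>

std::queue<int> shared_queue;
std::mutex queue_mutex;

// std::lock_guard: enough for "lock here, unlock automatically at scope exit".
void push(int value)
{
    std::lock_guard<std::mutex> lock(queue_mutex);
    shared_queue.push(value);
}

// std::unique_lock: same scoped behaviour, but it can also be unlocked early,
// moved around, or handed to a condition variable's wait().
bool try_pop(int& value)
{
    std::unique_lock<std::mutex> lock(queue_mutex);
    if (shared_queue.empty())
        return false;
    value = shared_queue.front();
    shared_queue.pop();
    lock.unlock();   // release before doing any further (unprotected) work
    return true;
}
```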
I'm using boost::interprocess::message_queue to enable communication between threads in my application. I'm doing so for two reasons. First, because I don't need to implement a shared-memory synchronization mechanism myself, and second, because I want to model the system this way since in the future it may become interprocess.
My question is: Is there a more appropriate mechanism for inter-thread communication given these constraints, or can I continue using the interprocess queue without fear of 'interprocess overhead'?
You could use a std::queue protected by a boost::mutex and a boost::condition_variable.
Anthony Williams provides an excellent explanation of how to implement a thread-safe queue in his book 'C++ Concurrency in Action'.
Example code is available on his website here:
Just Software Solutions - Implementing a Thread Safe Queue
I have a game with two threads. The first generates instances of a custom class every 50 ms and needs to store them; I push them into a queue, but I am not sure that is thread safe. The second thread can read faster or slower than that (its speed changes over time): if the queue is not empty, it pops the first element and does some calculations with it. Is there any thread-safe data structure for this problem in the STL or Boost?
Using std::queue or any similar container will not be thread safe. If you want your accesses (push/pop) to a std::queue to be thread safe, you should use a boost::mutex or a similar mechanism to lock before each access. You can look at boost::shared_mutex if you need read-only access from more than one thread (not sure you need that based on what you described).
Apart from that, you can take a look at boost::interprocess::message_queue, as someone has already mentioned: http://www.boost.org/doc/libs/1_50_0/boost/interprocess/ipc/message_queue.hpp (the most recent version of Boost at the time of writing).
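As a hedged sketch, usage of boost::interprocess::message_queue between the two threads could look roughly like this (the queue name, capacity and message type are arbitrary; only raw bytes are copied, so complex classes would need serializing first):

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>

using namespace boost::interprocess;

void producer()
{
    message_queue::remove("game_queue");                    // clean up any stale queue
    message_queue mq(create_only, "game_queue",
                     100,            // max number of messages
                     sizeof(int));   // max message size in bytes
    int item = 42;
    mq.send(&item, sizeof(item), 0); // last argument is the priority
}

void consumer()
{
    message_queue mq(open_only, "game_queue");
    int item;
    message_queue::size_type received_size;
    unsigned int priority;
    mq.receive(&item, sizeof(item), received_size, priority); // blocks until a message arrives
}
```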
Moreover, there is the concept of lock-free queues (en.wikipedia.org/wiki/Non-blocking_algorithm). I cannot provide an example of such an implementation, but I am sure you can find some if you search around.
I am a newbie in C++ and Boost.
As part of my master's thesis, I wrote a program that simulates a statistical model. During the computation, I use boost::thread to process my "center of mass vector" in order to save some computation time. So far so good.
Now I would like to take each result from the boost::thread (one element at a time) and pass it to a running thread, which will perform a recursive regression.
My questions:
How can I pass my newly computed element to the existing thread?
How can I "wake up" the thread when I pass it the new element?
I would be happy if someone could point me to an existing example.
The simplest possible way is to use std::queue, boost::mutex and boost::condition_variable. Wrap every access to the queue in the mutex; after pushing to the queue, call condition_variable::notify_one(). In the consumer thread, wait on the condition variable until a result is ready, then process it.
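A minimal sketch of exactly that recipe, assuming Boost.Thread (the element type and names are placeholders for your computed results):

```cpp
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>
#include <queue>

std::queue<double> results;             // computed elements
boost::mutex results_mutex;
boost::condition_variable results_cond;

// Called by the producing thread each time a new element is ready.
void push_result(double value)
{
    {
        boost::lock_guard<boost::mutex> lock(results_mutex);
        results.push(value);
    }
    results_cond.notify_one();          // wake up the regression thread
}

// The regression thread sleeps here until something arrives.
double wait_and_pop_result()
{
    boost::unique_lock<boost::mutex> lock(results_mutex);
    while (results.empty())
        results_cond.wait(lock);        // releases the mutex while waiting
    double value = results.front();
    results.pop();
    return value;
}
```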
A proven way to control a thread from another thread is to send messages via a combination of a queue and a condition variable. Unfortunately, boost::thread doesn't provide a standard solution, and there are a couple of tricky things to get right when implementing one (possible deadlocks, behaviour when the queue is full, use of polymorphic messages, ...).
You should use a mutex and/or a semaphore to synchronize your threads, and lock the shared variable to achieve thread-safe communication. Just note that all threads in your process share the same memory, so you can access the same data, but you have to do it in a thread-safe way.
I'm not sure whether the Boost library implements any threading primitives, but here is a good tutorial on multi-threaded programming using POSIX threads: http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
In the Concurrency Runtime introduced in VS2010, there is a concurrent_queue class. It has a non-blocking try_pop() function.
Similarly, in Intel Threading Building Blocks (TBB), the blocking pop() call was removed when going from version 2.1 to 2.2.
I wonder what the problem is with a blocking call. Why was it removed from TBB? And why is there no blocking concurrent_queue?
I'm in a situation where I need a blocking concurrent queue, and I don't want a busy wait.
Apart from writing a queue myself, is there another possibility in the concurrency runtime?
From a comment from Arch Robison, and it doesn't get much more "horse's mouth" than that (a):
PPL's concurrent_queue has no blocking pop, hence neither does tbb::strict_ppl::concurrent_queue. The blocking pop is available in tbb::concurrent_bounded_queue.
The design argument for omitting blocking pop is that in many cases, the synchronization for blocking is provided outside of the queue, in which case the implementation of blocking inside the queue becomes unnecessary overhead.
On the other hand, the blocking pop of the old tbb::concurrent_queue was popular among users who did not have outside synchronization.
So we split the functionality. Use cases that do not need blocking or boundedness can use the new tbb::concurrent_queue, and use cases that do need it can use tbb::concurrent_bounded_queue.
(a) Arch is the architect of Threading Building Blocks.
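In other words, a blocking consumer under TBB might look roughly like this sketch (the element type and names are arbitrary):

```cpp
#include <tbb/concurrent_queue.h>   // defines both tbb::concurrent_queue and tbb::concurrent_bounded_queue

tbb::concurrent_bounded_queue<int> queue;   // unbounded by default; set_capacity() bounds it

void producer()
{
    queue.push(42);        // blocks only if a capacity was set and reached
}

void consumer()
{
    int value;
    queue.pop(value);      // blocks until an element is available (no busy wait)
    // process value ...
}
```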
If you need a blocking pop without a busy wait, you need a method of signaling. This implies synchronization between pusher and popper, and the queue is no longer free of (expensive) synchronization primitives. You basically get a normal synchronized queue with a condition variable used to notify poppers of pushes, which is not in the spirit of the concurrent_* collections.
The question was whether there is another option in the Concurrency Runtime that provides blocking-queue functionality, since concurrent_queue does not, and there is one in VS2010.
Arch's comment is of course completely correct: blocking queues and non-blocking queues are separate use cases, and this is why they are handled differently in VS2010 and in TBB.
In VS2010 you can use the template class unbounded_buffer, located in <agents.h>; the appropriate methods are called enqueue and dequeue.
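A small usage sketch, assuming the Asynchronous Agents Library that ships with VS2010 (the message type and function names are arbitrary):

```cpp
#include <agents.h>   // concurrency::unbounded_buffer

concurrency::unbounded_buffer<int> buffer;

void producer()
{
    buffer.enqueue(42);              // send a message into the buffer
}

void consumer()
{
    int value = buffer.dequeue();    // blocks until a message is available
    // process value ...
}
```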
-Rick
There is no situation, from the queue's standpoint, in which it should need to block for an insert or remove. The fact that you may need to block and wait for an insert is immaterial.
You can achieve the functionality you desire by using a condition variable, a counting semaphore, or something along those lines (whatever your specific API provides). Your trouble isn't with blocking/non-blocking; it sounds like a classic producer-consumer problem.
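For example, one way to get a blocking pop without a busy wait is to layer a condition variable on top of the non-blocking try_pop(), roughly like this sketch (it assumes a C++11 standard library or the Boost equivalents; the wrapper name is made up):

```cpp
#include <concurrent_queue.h>    // concurrency::concurrent_queue (VS2010+)
#include <condition_variable>
#include <mutex>

template <typename T>
class blocking_queue
{
public:
    void push(const T& value)
    {
        queue_.push(value);                       // lock-free push
        std::lock_guard<std::mutex> lock(mutex_); // taken only to signal safely
        cond_.notify_one();
    }

    T pop()                                       // blocks until an element arrives
    {
        std::unique_lock<std::mutex> lock(mutex_);
        T value;
        cond_.wait(lock, [&] { return queue_.try_pop(value); });
        return value;
    }

private:
    concurrency::concurrent_queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
```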