I'm trying to implement a blocking queue (it's a kind of educational task). The main parts are the following:
template <typename T>
class Blocking_queue
{
public:
std::queue<T> _queue;
boost::mutex _mutex;
boost::condition_variable _cvar;
void Put(T& object);
T Get();
void Disable();
};
template<typename T>
void Blocking_queue<T>::Put(T& object)
{
boost::mutex::scoped_lock lock(_mutex);
_queue.push(object);
lock.unlock();
_cvar.notify_one();
}
template<typename T>
T Blocking_queue<T>::Get()
{
boost::mutex::scoped_lock lock(_mutex);
while(_queue.empty())
{
_cvar.wait(lock);
}
T last_el = _queue.front();
_queue.pop();
return last_el;
}
template<typename T>
void Blocking_queue<T>::Disable()
{
}
I need to implement a function Disable() "releasing" all waiting threads (as written in the task). The problem is that I don't fully understand what "releasing" means in these terms, and what methods I should apply. So my idea is the following: when Disable() is called, we should call some method for the current thread in this place (inside the loop)
while(_queue.empty())
{
//here
_cvar.wait(lock);
}
which will release the current thread. Am I right? Thanks.
"releasing all threads that are waiting" is an operation that is hardly useful. What do you want to do with this operation?
What is useful, is to shutdown the queue, thus every thread waiting on the queue will be unblocked and every thread that is going to call Get() will return immediately. To implement such a behaviour, simply add a shutdown flag to the queue and wait for "not empty or shutdown":
template<typename T>
void Blocking_queue<T>::Disable()
{
boost::mutex::scoped_lock lock(_mutex);
_shutdown = true;
_cvar.notify_all();
}
To indicate to the caller of Get() that there is no data, you could return a pair with an additional bool, or throw a special exception. There is no way to return null, since not every type T has a null value.
template<typename T>
std::pair<bool, T> Blocking_queue<T>::Get()
{
boost::mutex::scoped_lock lock(_mutex);
while (_queue.empty() && !_shutdown )
_cvar.wait(lock);
if ( _shutdown )
return std::make_pair( false, T() );
T last_el = _queue.front();
_queue.pop();
return std::make_pair( true, last_el );
}
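A consumer would then loop until Get() reports shutdown. A minimal sketch (Process() is a hypothetical handler standing in for whatever the consumer does with an element):
void Consume(Blocking_queue<int>& queue)
{
    for (;;)
    {
        std::pair<bool, int> item = queue.Get();
        if (!item.first)
            break; // the queue was disabled; no more data will arrive
        Process(item.second); // hypothetical: replace with your own handling
    }
}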
It looks like scoped_lock in C++17 gives the functionality I'm after; however, I'm presently tied to C++11.
At the moment I'm seeing deadlock issues with lock_guard when we acquire the same mutex more than once. Does scoped_lock protect against multiple calls (i.e., is it reentrant)?
Is there a best practice for doing this in C++11 with lock_guard?
mutex lockingMutex;
void get(string s)
{
lock_guard<mutex> lock(lockingMutex);
if (isPresent(s))
{
//....
}
}
bool isPresent(string s)
{
bool ret = false;
lock_guard<mutex> lock(lockingMutex);
//....
return ret;
}
To be able to lock the same mutex multiple times one needs to use std::recursive_mutex. A recursive mutex is more expensive than a non-recursive one.
Best practice, though, is to design your code in such a way that a thread does not lock the same mutex multiple times. For example, have your public functions lock the mutex first and then invoke an implementation function that expects the mutex to have been locked already. Implementation functions must not call the public API functions that lock the mutex. E.g.:
class A {
std::mutex m_;
int state_ = 0;
private: // These expect the mutex to have been locked.
void foo_() {
++state_;
}
void bar_() {
this->foo_();
}
public: // Public functions lock the mutex first.
void foo() {
std::lock_guard<std::mutex> lock(m_);
this->foo_();
}
void bar() {
std::lock_guard<std::mutex> lock(m_);
this->bar_();
}
};
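For contrast, here is a minimal sketch of the std::recursive_mutex alternative mentioned above (class name B is made up for the example). The same thread may relock the mutex, at extra runtime cost:
#include <mutex>
class B {
    std::recursive_mutex m_;
    int state_ = 0;
public:
    void foo() {
        std::lock_guard<std::recursive_mutex> lock(m_);
        ++state_;
    }
    void bar() {
        std::lock_guard<std::recursive_mutex> lock(m_);
        this->foo(); // OK: the same thread may relock a recursive_mutex
    }
};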
Scoped lock does not give the functionality you are looking for.
Scoped lock is just a variadic version of lock guard. It only exists due to some ABI issues with changing lock guard into a variadic template.
To have reentrant mutexes, you need to use a reentrant mutex. But this is both more expensive at runtime, and usually indicates a lack of care in your mutex state. While holding a mutex you should have complete and total understanding of all other synchronization actions you are performing.
Once you have complete understanding of all synchronizations actions you are performing, it is easy to avoid recursively locking.
There are two patterns you can consider here. First, split the public locking API from a private non-locking API. Second, split synchronization from implementation.
class cache { // hypothetical enclosing class; the original snippet needs one to compile
private:
mutex lockingMutex;
bool isPresent(string s, lock_guard<mutex> const& lock) {
bool ret = false;
//....
return ret;
}
void get(string s, lock_guard<mutex> const& lock) {
if (isPresent(s, lock))
{
//....
}
}
public:
void get(string s) {
return get( std::move(s), lock_guard<mutex>(lockingMutex) );
}
bool isPresent(string s) {
return isPresent( std::move(s), lock_guard<mutex>(lockingMutex) );
}
};
here I use lock_guard<mutex> as "proof we have a lock".
An often better alternative is to write your class as non-thread-safe, then use a wrapper:
template<class T>
struct mutex_guarded {
template<class T0, class...Ts,
std::enable_if_t<!std::is_same<std::decay_t<T0>, mutex_guarded>{}, bool> =true
>
mutex_guarded(T0&&t0, Ts&&...ts):
t( std::forward<T0>(t0), std::forward<Ts>(ts)... )
{}
mutex_guarded()=default;
~mutex_guarded()=default;
template<class F>
auto read( F&& f ) const {
auto l = lock();
return f(t);
}
template<class F>
auto write( F&& f ) {
auto l = lock();
return f(t);
}
private:
auto lock() { return std::unique_lock<std::mutex>(m); }
auto lock() const { return std::unique_lock<std::mutex>(m); }
mutable std::mutex m;
T t;
};
now we can use it like this:
mutex_guarded<Foo> foo;
foo.write([&](auto&&foo){ foo.get("hello"); } );
you can write mutex_guarded, shared_mutex_guarded, not_mutex_guarded or even async_guarded (which returns futures and serializes actions in a worker thread).
So long as the class doesn't leave its own "zone of control" in methods this pattern makes writing mutex-guarded data much easier, and lets you compose related mutex guarded data into one bundle without having to rewrite them.
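As an illustration of one of those variations, a shared_mutex_guarded might look like the following sketch, assuming C++17 for std::shared_mutex. Readers take a shared lock and may run concurrently; writers take an exclusive lock:
#include <shared_mutex>
template<class T>
struct shared_mutex_guarded {
    template<class F>
    auto read( F&& f ) const { // shared lock: many concurrent readers
        std::shared_lock<std::shared_mutex> l(m);
        return f(t);
    }
    template<class F>
    auto write( F&& f ) { // exclusive lock: a single writer
        std::unique_lock<std::shared_mutex> l(m);
        return f(t);
    }
private:
    mutable std::shared_mutex m;
    T t;
};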
Is there a standard way to add a std::packaged_task to an existing thread? There's a nontrivial amount of overhead that must happen before the task is run, so I want to do that once, then keep the thread running and waiting for tasks to execute. I want to be able to use futures so I can optionally get the result of the task and catch exceptions.
My pre-C++11 implementation requires my tasks to inherit from an abstract base class with a Run() method (a bit of a pain; no lambdas), and keeps a std::deque collection of those that I add to in the main thread and dequeue from in the worker thread. I have to protect that collection from simultaneous access and provide a signal to the worker thread that there's something to do so it isn't spinning or sleeping. Enqueuing something returns a "result" object with a synchronization object to wait on for the task to complete, and a result value. It all works well, but it's time for an upgrade if there's something better.
Here is a toy thread pool:
template<class T>
struct threaded_queue {
using lock = std::unique_lock<std::mutex>;
void push_back( T t ) {
{
lock l(m);
data.push_back(std::move(t));
}
cv.notify_one();
}
boost::optional<T> pop_front() {
lock l(m);
cv.wait(l, [this]{ return abort || !data.empty(); } );
if (abort) return {};
auto r = std::move(data.front());
data.pop_front();
return std::move(r);
}
void terminate() {
{
lock l(m);
abort = true;
data.clear();
}
cv.notify_all();
}
~threaded_queue()
{
terminate();
}
private:
std::mutex m;
std::deque<T> data;
std::condition_variable cv;
bool abort = false;
};
struct thread_pool {
thread_pool( std::size_t n = 1 ) { start_thread(n); }
thread_pool( thread_pool&& ) = delete;
thread_pool& operator=( thread_pool&& ) = delete;
~thread_pool() = default; // or `{ terminate(); }` if you want to abandon some tasks
template<class F, class R=std::result_of_t<F&()>>
std::future<R> queue_task( F task ) {
std::packaged_task<R()> p(std::move(task));
auto r = p.get_future();
tasks.push_back( std::move(p) );
return r;
}
template<class F, class R=std::result_of_t<F&()>>
std::future<R> run_task( F task ) {
if (threads_active() >= total_threads()) {
start_thread();
}
return queue_task( std::move(task) );
}
void terminate() {
tasks.terminate();
}
std::size_t threads_active() const {
return active;
}
std::size_t total_threads() const {
return threads.size();
}
void clear_threads() {
terminate();
threads.clear();
}
void start_thread( std::size_t n = 1 ) {
while(n-->0) {
threads.push_back(
std::async( std::launch::async,
[this]{
while(auto task = tasks.pop_front()) {
++active;
try{
(*task)();
} catch(...) {
--active;
throw;
}
--active;
}
}
)
);
}
}
private:
std::vector<std::future<void>> threads;
threaded_queue<std::packaged_task<void()>> tasks;
std::atomic<std::size_t> active{0};
};
copied from another answer of mine.
A thread_pool with 1 thread matches your description pretty much.
The above is only a toy; in a real thread pool I'd replace the std::packaged_task<void()> with a move_only_function<void()>, which is all I use it for. (Amusingly, a packaged_task<void()> can hold a packaged_task<R()>, if inefficiently.)
You will have to reason about shutdown and make a plan. The above code locks up if you try to shut it down without first clearing the threads.
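A minimal usage sketch (assuming <iostream> is included), following the advice above to clear the threads before destruction:
int main() {
    thread_pool pool(1); // one worker thread, matching the question's use case
    auto f = pool.queue_task([]{ return 6 * 7; });
    std::cout << f.get() << "\n"; // blocks until the task has run; prints 42
    pool.clear_threads(); // shut the pool down before it is destroyed
}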
I'm struggling to implement a thread-safe reference-counted queue. The idea is that I have a number of tasks that each maintain a shared_ptr to a task manager that owns the queue. Here is a minimal implementation that should encounter that same issue:
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
namespace {
class TaskManager;
struct Task {
std::function<void()> f;
std::shared_ptr<TaskManager> manager;
};
class Queue {
public:
Queue()
: _queue()
, _mutex()
, _cv()
, _running(true)
, _thread([this]() { sweepQueue(); })
{
}
~Queue() { close(); }
void close() noexcept
{
try {
{
std::lock_guard<std::mutex> lock(_mutex);
if (!_running) {
return;
}
_running = false;
}
_cv.notify_one();
_thread.join();
} catch (...) {
std::cerr << "An error occurred while closing the queue\n";
}
}
void push(Task&& task)
{
std::unique_lock<std::mutex> lock(_mutex);
_queue.emplace_back(std::move(task));
lock.unlock();
_cv.notify_one();
}
private:
void sweepQueue() noexcept
{
while (true) {
try {
std::unique_lock<std::mutex> lock(_mutex);
_cv.wait(lock, [this] { return !_running || !_queue.empty(); });
if (!_running && _queue.empty()) {
return;
}
if (!_queue.empty()) {
const auto task = _queue.front();
_queue.pop_front();
task.f();
}
} catch (...) {
std::cerr << "An error occurred while sweeping the queue\n";
}
}
}
std::deque<Task> _queue;
std::mutex _mutex;
std::condition_variable _cv;
bool _running;
std::thread _thread;
};
class TaskManager : public std::enable_shared_from_this<TaskManager> {
public:
void addTask(std::function<void()> f)
{
_queue.push({ f, shared_from_this() });
}
private:
Queue _queue;
};
} // anonymous namespace
int main(void)
{
const auto manager = std::make_shared<TaskManager>();
manager->addTask([]() { std::cout << "Hello world\n"; });
}
The problem I find is that on rare occasions, the queue will try to invoke its own destructor within the sweepQueue method. Upon further inspection, it seems that the reference count on the TaskManager hits zero once the last task is dequeued. How can I safely maintain the reference count without invoking the destructor?
Update: The example does not clarify the need for the std::shared_ptr<TaskManager> within Task. Here is an example use case that should illustrate the need for this seemingly unnecessary ownership cycle.
std::unique_ptr<Task> task;
{
const auto manager = std::make_shared<TaskManager>();
task = std::make_unique<Task>(someFunc, manager);
}
// Guarantees manager is not destroyed while task is still in scope.
The ownership hierarchy here is TaskManager owns Queue and Queue owns Tasks. Tasks maintaining a shared pointer to TaskManager create an ownership cycle which does not seem to serve a useful purpose here.
This ownership cycle is the root of the problem here. A Queue is owned by TaskManager, so the Queue can hold a plain pointer to TaskManager and pass that pointer to each Task in sweepQueue. You do not need a std::shared_ptr<TaskManager> in Task at all here.
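A minimal sketch of that suggestion; since TaskManager owns the Queue, the manager is guaranteed to outlive every Task the queue holds, so a raw pointer suffices:
struct Task {
    std::function<void()> f;
    TaskManager* manager; // non-owning: the manager outlives the queue it owns
};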
I'd split the queue off from the thread first.
But to fix your problem:
struct am_I_alive {
am_I_alive( std::weak_ptr<void> p ) : m_ptr( std::move( p ) ) {}
explicit operator bool() const { return static_cast<bool>( m_ptr.lock() ); }
private:
std::weak_ptr<void> m_ptr;
};
struct lifetime_tracker {
am_I_alive track_lifetime() {
if (!m_ptr) m_ptr = std::make_shared<bool>(true);
return {m_ptr};
}
lifetime_tracker() = default;
lifetime_tracker(lifetime_tracker const&) {} // do nothing, don't copy
lifetime_tracker& operator=(lifetime_tracker const&){ return *this; }
private:
std::shared_ptr<void> m_ptr;
};
this is a little utility to detect if we have been deleted. It is useful in any code that calls an arbitrary callback whose side effect could include delete(this).
Privately inherit your Queue from it.
Then split popping the task from running it.
std::optional<Task> get_task() {
std::unique_lock<std::mutex> lock(_mutex);
_cv.wait(lock, [this] { return !_running || !_queue.empty(); });
if (!_running && _queue.empty()) {
return {}; // end
}
auto task = _queue.front();
_queue.pop_front();
return task;
}
void sweepQueue() noexcept
{
while (true) {
try {
auto task = get_task();
if (!task) return;
// we are alive here
auto alive = track_lifetime();
try {
(*task).f();
} catch(...) {
std::cerr << "An error occurred while running a task\n";
}
task={};
// we could be deleted here
if (!alive)
return; // this was deleted, get out of here
} catch (...) {
std::cerr << "An error occurred while sweeping the queue\n";
}
}
}
and now you are safe.
After that you need to deal with the thread problem.
The thread problem is that your code needs to destroy the thread from within the thread it is running on. At the same time, you also need to guarantee that the thread has terminated before main ends.
These are not compatible.
To fix that, you need to create a thread-owning pool that doesn't have your "keep alive" semantics, and get your thread from there.
These threads don't delete themselves; instead, they return themselves to that pool for reuse by another client.
At shutdown, you block on those threads to ensure you don't have code running elsewhere that hasn't halted before the end of main.
To write such a pool without your inverted dependency mess, split the queue part of your code off. This queue owns no thread.
template<class T>
struct threadsafe_queue {
void push(T);
std::optional<T> pop(); // returns empty if the queue was aborted
void abort();
~threadsafe_queue();
private:
std::mutex m;
std::condition_variable v;
std::deque<T> data;
bool aborted = false;
};
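A minimal sketch of how those members might be implemented (headers as in the earlier snippets; the destructor is omitted, it would typically call abort()):
template<class T>
void threadsafe_queue<T>::push(T t) {
    {
        std::unique_lock<std::mutex> l(m);
        data.push_back(std::move(t));
    }
    v.notify_one();
}
template<class T>
std::optional<T> threadsafe_queue<T>::pop() {
    std::unique_lock<std::mutex> l(m);
    v.wait(l, [this]{ return aborted || !data.empty(); });
    if (aborted) return {}; // empty optional signals shutdown
    auto r = std::move(data.front());
    data.pop_front();
    return std::move(r);
}
template<class T>
void threadsafe_queue<T>::abort() {
    {
        std::unique_lock<std::mutex> l(m);
        aborted = true;
        data.clear();
    }
    v.notify_all();
}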
then a simple thread pool:
struct thread_pool {
template<class F>
std::future<std::result_of_t<F&()>> enqueue( F&& f );
template<class F>
std::future<std::result_of_t<F&()>> thread_off_now( F&& f ); // starts a thread if there aren't any free
void abort();
void start_thread( std::size_t n = 1 );
std::size_t count_threads() const;
~thread_pool();
private:
threadsafe_queue< std::function<void()> > tasks;
std::vector< std::thread > threads;
static void thread_loop( thread_pool* pool );
};
make the thread pool a singleton. Get the threads for your queue from the thread_off_now method, which guarantees you a thread that (when you are done with it) can be recycled, and whose lifetime is handled by someone else.
But really, you should instead be thinking with ownership in mind. The idea that tasks and task queues mutually own each other is a mess.
If someone disposes of a task queue, it is probably a good idea to abandon the tasks instead of persisting them magically and silently.
Which is what my simple thread pool does.
A project I'm working on uses multiple threads to do work on a collection of files. Each thread can add files to the list of files to be processed, so I put together (what I thought was) a thread-safe queue. Relevant portions follow:
// qMutex is a std::mutex intended to guard the queue
// populatedNotifier is a std::condition_variable intended to
// notify waiting threads of a new item in the queue
void FileQueue::enqueue(std::string&& filename)
{
std::lock_guard<std::mutex> lock(qMutex);
q.push(std::move(filename));
// Notify anyone waiting for additional files that more have arrived
populatedNotifier.notify_one();
}
std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
std::unique_lock<std::mutex> lock(qMutex);
if (q.empty()) {
if (populatedNotifier.wait_for(lock, timeout) == std::cv_status::no_timeout) {
std::string ret = q.front();
q.pop();
return ret;
}
else {
return std::string();
}
}
else {
std::string ret = q.front();
q.pop();
return ret;
}
}
However, I am occasionally segfaulting inside the if (...wait_for(lock, timeout) == std::cv_status::no_timeout) { } block, and inspection in gdb indicates that the segfaults are occurring because the queue is empty. How is this possible? It was my understanding that wait_for only returns cv_status::no_timeout when it has been notified, and this should only happen after FileQueue::enqueue has just pushed a new item to the queue.
It is best to make the condition monitored by your condition variable the inverse of a while loop's condition: while(!some_condition). Inside this loop, you go to sleep whenever the condition fails; the wait is the body of the loop.
This way, if your thread is awoken, possibly spuriously, your loop will still check the condition before proceeding. Think of the condition as the state of interest, and think of the condition variable as more of a signal from the system that this state might be ready. The loop does the heavy lifting of actually confirming that it's true, and goes back to sleep if it's not.
I just wrote a template for an async queue, hope this helps. Here, q.empty() is the inverse condition of what we want: for the queue to have something in it. So it serves as the check for the while loop.
#ifndef SAFE_QUEUE
#define SAFE_QUEUE
#include <queue>
#include <mutex>
#include <condition_variable>
// A threadsafe-queue.
template <class T>
class SafeQueue
{
public:
SafeQueue(void)
: q()
, m()
, c()
{}
~SafeQueue(void)
{}
// Add an element to the queue.
void enqueue(T t)
{
std::lock_guard<std::mutex> lock(m);
q.push(t);
c.notify_one();
}
// Get the "front"-element.
// If the queue is empty, wait until an element is available.
T dequeue(void)
{
std::unique_lock<std::mutex> lock(m);
while(q.empty())
{
// release the lock for the duration of the wait and reacquire it afterwards.
c.wait(lock);
}
T val = q.front();
q.pop();
return val;
}
private:
std::queue<T> q;
mutable std::mutex m;
std::condition_variable c;
};
#endif
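A minimal usage sketch with one producer and one consumer thread:
#include <iostream>
#include <thread>
int main()
{
    SafeQueue<int> q;
    std::thread producer([&q]{
        for (int i = 0; i < 3; ++i)
            q.enqueue(i);
    });
    std::thread consumer([&q]{
        for (int i = 0; i < 3; ++i)
            std::cout << q.dequeue() << "\n"; // blocks until an element arrives
    });
    producer.join();
    consumer.join();
}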
According to the standard, condition variables are allowed to wake up spuriously, even if the event hasn't occurred. In the case of a spurious wakeup, wait_for will return cv_status::no_timeout (since it woke up instead of timing out), even though it hasn't been notified. The correct solution is, of course, to check whether the wakeup was actually legitimate before proceeding.
The details are specified in the standard §30.5.1 [thread.condition.condvar]:
— The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout (30.2.4) specified by abs_time, or spuriously.
...
— Returns: cv_status::timeout if the absolute timeout (30.2.4) specified by abs_time expired, otherwise cv_status::no_timeout.
This is probably how you should do it:
void push(std::string&& filename)
{
{
std::lock_guard<std::mutex> lock(qMutex);
q.push(std::move(filename));
}
populatedNotifier.notify_one();
}
bool try_pop(std::string& filename, std::chrono::milliseconds timeout)
{
std::unique_lock<std::mutex> lock(qMutex);
if(!populatedNotifier.wait_for(lock, timeout, [this] { return !q.empty(); }))
return false;
filename = std::move(q.front());
q.pop();
return true;
}
Adding to the accepted answer, I would say that implementing a correct multi-producer/multi-consumer queue is difficult (easier since C++11, though).
I would suggest you try the (very good) Boost lock-free library; the "queue" structure will do what you want, with wait-free/lock-free guarantees and without the need for a C++11 compiler.
I am adding this answer now because the lock-free library is quite new to Boost (since 1.53, I believe).
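For reference, a minimal sketch of using boost::lockfree::queue (note that both push and pop are non-blocking and report success via their return value):
#include <boost/lockfree/queue.hpp>
#include <iostream>
int main()
{
    boost::lockfree::queue<int> q(128); // initial capacity
    q.push(42); // returns false if a node could not be allocated
    int value;
    if (q.pop(value)) // returns false if the queue is empty
        std::cout << value << "\n";
}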
I would rewrite your dequeue function as:
std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
std::unique_lock<std::mutex> lock(qMutex);
while(q.empty()) {
if (populatedNotifier.wait_for(lock, timeout) == std::cv_status::timeout )
return std::string();
}
std::string ret = q.front();
q.pop();
return ret;
}
It is shorter and does not have duplicate code like yours did. The only issue is that it may wait longer than the timeout. To prevent that, you would need to remember the start time before the loop, check for timeout, and adjust the wait time accordingly, or specify an absolute time for the wait condition, as in the sketch below.
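A sketch of the absolute-time variant; computing the deadline once before the loop bounds the total wait by the original timeout even across spurious wakeups:
std::string FileQueue::dequeue(const std::chrono::milliseconds& timeout)
{
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    std::unique_lock<std::mutex> lock(qMutex);
    while (q.empty()) {
        if (populatedNotifier.wait_until(lock, deadline) == std::cv_status::timeout)
            return std::string();
    }
    std::string ret = q.front();
    q.pop();
    return ret;
}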
There is also a GLib solution for this case. I have not tried it yet, but I believe it is a good solution.
https://developer.gnome.org/glib/2.36/glib-Asynchronous-Queues.html#g-async-queue-new
BlockingCollection is a C++11 thread-safe collection class that provides support for queue, stack, and priority containers. It handles the "empty" queue scenario you described, as well as a "full" queue.
You may like lfqueue, https://github.com/Taymindis/lfqueue.
It's a lock-free concurrent queue. I'm currently using it to consume a queue fed by multiple incoming calls, and it works like a charm.
This is my implementation of a thread-queue in C++20:
#pragma once
#include <deque>
#include <mutex>
#include <condition_variable>
#include <utility>
#include <concepts>
#include <list>
template<typename QueueType>
concept thread_queue_concept =
std::same_as<QueueType, std::deque<typename QueueType::value_type, typename QueueType::allocator_type>>
|| std::same_as<QueueType, std::list<typename QueueType::value_type, typename QueueType::allocator_type>>;
template<typename QueueType>
requires thread_queue_concept<QueueType>
struct thread_queue
{
using value_type = typename QueueType::value_type;
thread_queue();
explicit thread_queue( typename QueueType::allocator_type const &alloc );
thread_queue( thread_queue &&other );
thread_queue &operator =( thread_queue const &other );
thread_queue &operator =( thread_queue &&other );
bool empty() const;
std::size_t size() const;
void shrink_to_fit();
void clear();
template<typename ... Args>
requires std::is_constructible_v<typename QueueType::value_type, Args ...>
void enque( Args &&... args );
template<typename Producer>
requires requires( Producer producer ) { { producer() } -> std::same_as<std::pair<bool, typename QueueType::value_type>>; }
void enqueue_multiple( Producer producer );
template<typename Consumer>
requires requires( Consumer consumer, typename QueueType::value_type value ) { { consumer( std::move( value ) ) } -> std::same_as<bool>; }
void dequeue_multiple( Consumer consumer );
typename QueueType::value_type dequeue();
void swap( thread_queue &other );
private:
mutable std::mutex m_mtx;
mutable std::condition_variable m_cv;
QueueType m_queue;
};
template<typename QueueType>
requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue()
{
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue( typename QueueType::allocator_type const &alloc ) :
m_queue( alloc )
{
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
thread_queue<QueueType>::thread_queue( thread_queue &&other )
{
using namespace std;
lock_guard lock( other.m_mtx );
m_queue = move( other.m_queue );
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
thread_queue<QueueType> &thread_queue<QueueType>::thread_queue::operator =( thread_queue const &other )
{
std::lock_guard
ourLock( m_mtx ),
otherLock( other.m_mtx );
m_queue = other.m_queue;
return *this;
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
thread_queue<QueueType> &thread_queue<QueueType>::thread_queue::operator =( thread_queue &&other )
{
using namespace std;
lock_guard
ourLock( m_mtx ),
otherLock( other.m_mtx );
m_queue = move( other.m_queue );
return *this;
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
bool thread_queue<QueueType>::thread_queue::empty() const
{
std::lock_guard lock( m_mtx );
return m_queue.empty();
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
std::size_t thread_queue<QueueType>::thread_queue::size() const
{
std::lock_guard lock( m_mtx );
return m_queue.size();
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::shrink_to_fit()
{
std::lock_guard lock( m_mtx );
return m_queue.shrink_to_fit();
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::clear()
{
std::lock_guard lock( m_mtx );
m_queue.clear();
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
template<typename ... Args>
requires std::is_constructible_v<typename QueueType::value_type, Args ...>
void thread_queue<QueueType>::thread_queue::enque( Args &&... args )
{
using namespace std;
unique_lock lock( m_mtx );
m_queue.emplace_front( forward<Args>( args ) ... );
m_cv.notify_one();
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
typename QueueType::value_type thread_queue<QueueType>::thread_queue::dequeue()
{
using namespace std;
unique_lock lock( m_mtx );
while( m_queue.empty() )
m_cv.wait( lock );
value_type value = move( m_queue.back() );
m_queue.pop_back();
return value;
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
template<typename Producer>
requires requires( Producer producer ) { { producer() } -> std::same_as<std::pair<bool, typename QueueType::value_type>>; }
void thread_queue<QueueType>::enqueue_multiple( Producer producer )
{
using namespace std;
lock_guard lock( m_mtx );
for( std::pair<bool, value_type> ret; (ret = producer()).first; )
{
m_queue.emplace_front( move( ret.second ) );
m_cv.notify_one();
}
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
template<typename Consumer>
requires requires( Consumer consumer, typename QueueType::value_type value ) { { consumer( std::move( value ) ) } -> std::same_as<bool>; }
void thread_queue<QueueType>::dequeue_multiple( Consumer consumer )
{
using namespace std;
unique_lock lock( m_mtx );
for( ; ; )
{
while( m_queue.empty() )
m_cv.wait( lock );
try
{
bool cont = consumer( move( m_queue.back() ) );
m_queue.pop_back();
if( !cont )
return;
}
catch( ... )
{
m_queue.pop_back();
throw;
}
}
}
template<typename QueueType>
requires thread_queue_concept<QueueType>
void thread_queue<QueueType>::thread_queue::swap( thread_queue &other )
{
std::lock_guard
ourLock( m_mtx ),
otherLock( other.m_mtx );
m_queue.swap( other.m_queue );
}
The only template parameter is QueueType, which can be a std::deque or std::list type, restricted by thread_queue_concept. The class uses this type as its internal queue type. Choose the QueueType that is most efficient for your application. I could have used a more differentiated thread_queue_concept that checks only for the parts of QueueType actually used, so that the class would also accept other types compatible with std::list<> or std::deque<>, but I was too lazy to implement that for the unlikely case that someone writes such a type on their own.
One advantage of this code is the pair enqueue_multiple and dequeue_multiple. These functions are given a function object, usually a lambda, which can enqueue or dequeue multiple items with only one locking step. For enqueue this always holds true; for dequeue it depends on whether the queue has elements to fetch.
enqueue_multiple usually makes sense if you have one producer and multiple consumers. It results in longer periods holding the lock, and therefore makes sense only if the items can be produced or moved quickly.
dequeue_multiple usually makes sense if you have multiple producers and one consumer. Here we also have longer locking periods, but as objects usually only need a fast move here, this normally doesn't hurt.
If the consumer function object of dequeue_multiple throws an exception while consuming, the exception is caught, the element provided to the consumer (as an rvalue reference into the underlying queue's object) is removed, and the exception is rethrown.
If you would like to use this class with C++11, you have to remove the concepts or disable them with #if defined(__cpp_concepts).
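A minimal usage sketch, assuming a std::deque-backed instantiation:
#include <iostream>
#include <thread>
int main()
{
    thread_queue<std::deque<int>> q;
    std::thread producer([&q]{ q.enque(42); });
    std::cout << q.dequeue() << "\n"; // blocks until the producer has pushed
    producer.join();
}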
I'm working on a server-side project, which is supposed to accept more than 100 client connections.
It's a multithreaded program using boost::thread. In some places I'm using boost::lock_guard<boost::mutex> to lock shared member data. There is also a BlockingQueue<ConnectionPtr> which contains the incoming connections. The implementation of the BlockingQueue:
template <typename DataType>
class BlockingQueue : private boost::noncopyable
{
public:
BlockingQueue()
: nblocked(0), stopped(false)
{
}
~BlockingQueue()
{
Stop(true);
}
void Push(const DataType& item)
{
boost::mutex::scoped_lock lock(mutex);
queue.push(item);
lock.unlock();
cond.notify_one(); // cond.notify_all();
}
bool Empty() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.empty();
}
std::size_t Count() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.size();
}
bool TryPop(DataType& poppedItem)
{
boost::mutex::scoped_lock lock(mutex);
if (queue.empty())
return false;
poppedItem = queue.front();
queue.pop();
return true;
}
DataType WaitPop()
{
boost::mutex::scoped_lock lock(mutex);
++nblocked;
while (!stopped && queue.empty()) // Or: if (queue.empty())
cond.wait(lock);
--nblocked;
if (stopped)
{
cond.notify_all(); // Tell Stop() that this thread has left
BOOST_THROW_EXCEPTION(BlockingQueueTerminatedException());
}
DataType tmp(queue.front());
queue.pop();
return tmp;
}
void Stop(bool wait)
{
boost::mutex::scoped_lock lock(mutex);
stopped = true;
cond.notify_all();
if (wait) // Wait until all threads blocked in BlockingQueue::WaitPop() have left
{
while (nblocked)
cond.wait(lock);
}
}
private:
std::queue<DataType> queue;
mutable boost::mutex mutex;
boost::condition_variable_any cond;
unsigned int nblocked;
bool stopped;
};
For each Connection, there is a ConcurrentQueue<StreamPtr>, which contains the input Streams. The implementation of the ConcurrentQueue:
template <typename DataType>
class ConcurrentQueue : private boost::noncopyable
{
public:
void Push(const DataType& item)
{
boost::mutex::scoped_lock lock(mutex);
queue.push(item);
}
bool Empty() const
{
boost::mutex::scoped_lock lock(mutex);
return queue.empty();
}
bool TryPop(DataType& poppedItem)
{
boost::mutex::scoped_lock lock(mutex);
if (queue.empty())
return false;
poppedItem = queue.front();
queue.pop();
return true;
}
private:
std::queue<DataType> queue;
mutable boost::mutex mutex;
};
When debugging, the program is okay. But under load testing with 50, 100, or more client connections, it sometimes aborts with
pthread_mutex_lock.c:321: __pthread_mutex_lock_full: Assertion `robust || (oldval & 0x40000000) == 0' failed.
I have no idea what happened, and it cannot be reproduced every time.
I googled a lot, but no luck. Please advise.
Thanks.
Peter
0x40000000 is FUTEX_OWNER_DIED - which has the following docs in the futex.h header:
/*
* The kernel signals via this bit that a thread holding a futex
* has exited without unlocking the futex. The kernel also does
* a FUTEX_WAKE on such futexes, after setting the bit, to wake
* up any possible waiters:
*/
#define FUTEX_OWNER_DIED 0x40000000
So the assertion seems to be an indication that a thread that's holding the lock is exiting for some reason. Is there a way that a thread object might be destroyed while it's holding a lock?
Another thing to check is if you have some sort of memory corruption somewhere. Valgrind might be a tool that can help you with that.
I had a similar issue and found this post. It may be useful for some of you: in my case, I was just missing the init.
pthread_mutex_init(&_mutexChangeMapEvent, NULL);