I have a priority queue whose compare function references a value accessed by multiple threads, so that value has to be protected by a mutex. The trouble is that I don't know when the compare function is run. Is it run when I push a value or when I pop a value? Example code below.
#include <iostream>
#include <queue>
#include <vector>  // std::vector (the underlying container)
#include <mutex>
#include <cstdlib> // std::abs
using namespace std;

int main()
{
    int compare = 7;
    mutex compare_m;
    auto cmp = [&](int a, int b) { return abs(compare - a) >= abs(compare - b); };
    priority_queue<int, vector<int>, decltype(cmp)> x(cmp);
    mutex x_m;
    //in thread
    {
        scoped_lock m1(x_m);
        //do I need this?
        scoped_lock m(compare_m);
        x.push(6);
    }
    //in thread
    {
        scoped_lock m1(x_m);
        //do I need this?
        scoped_lock m(compare_m);
        x.pop();
    }
}
To answer the question: if it is not documented, anything can happen, and we then cannot reason about when the comparator is invoked.
If we take a look at cppreference, push is defined in terms of push_heap, which reorganizes the elements into a heap. Since it needs to reorganize, we can conclude that it invokes the comparator. The situation is similar with pop, which invokes pop_heap and again modifies the underlying heap, so it too invokes the comparator.
So the above implies you need the critical section in both places (but please note the comments about whether it is actually safe to change the behaviour of the comparison function while the priority queue contains elements).
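As an aside, since both mutexes must be held for both operations, a single std::scoped_lock can acquire the two together; when given several mutexes it uses a deadlock-avoidance algorithm. A minimal sketch of the push side, reusing x, x_m and compare_m from the question:
//in thread
{
    // one scoped_lock acquires both mutexes, avoiding lock-order deadlocks
    scoped_lock both(x_m, compare_m);
    x.push(6);
}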
If I want to edit each element of a vector, I can use for_each() to loop over the elements. The problem is: how can I split this task across two threads?
I've tried the approach below, launching a thread with for_each(), but I'm getting errors.
For example, I'd like to add 1 to each element of the vector. When I bring in the threads, it seems I'm missing something that the compiler does not like.
#include <iostream>
#include <algorithm>
#include <vector>
#include <thread>
using namespace std;
int main()
{
    std::vector<int> nums; //declare a vector
    nums.push_back(1);
    nums.push_back(2);
    nums.push_back(3);
    nums.push_back(4); //push each element to the vector
    size_t i = (nums.size()/2); //I want to separate the task into two threads
    std::thread t1(std::for_each(nums.begin(), nums.begin()+i, [](int& num){
        num++;
    }));
    std::thread t2(std::for_each(nums.begin()+i, nums.end(), [](int& num){
        num++;
    }));
    t1.join();
    t2.join();
    return 0;
}
I'm getting these two errors:
Failed to specialize function template 'unknown-type std::invoke(_Callable &&) noexcept(<expr>)'
and
invoke': no matching overloaded function found
If I cannot do the threads in this way, what's the right way?
From C++17 onwards, you don't have to create your own threads to parallelise your code. Instead, you can pass an appropriate execution policy to std::for_each:
std::for_each (std::execution::par_unseq, nums.begin (), nums.end (), [] (int& num) { num++; });
Since there are no data races in your code, this is safe.
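For completeness, a minimal self-contained sketch (the execution policies live in the <execution> header; with GCC's libstdc++ the parallel algorithms are built on Intel TBB, so you may also need to link with -ltbb):
#include <algorithm>
#include <execution> // std::execution::par_unseq
#include <iostream>
#include <vector>

int main ()
{
    std::vector<int> nums {1, 2, 3, 4};
    // the library decides how to split the work across threads
    std::for_each (std::execution::par_unseq, nums.begin (), nums.end (),
                   [] (int& num) { num++; });
    for (int n : nums) std::cout << n << ' '; // prints: 2 3 4 5
}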
Paul's got a better approach, but if you don't have C++17 available, and to explain what went wrong, in
std::thread t1(std::for_each(nums.begin(),nums.begin()+i,[](int& num){
num++;
}));
the
std::for_each(nums.begin(),nums.begin()+i,[](int& num){
num++;
})
is a function call, evaluated before the thread is even constructed, and it returns a value that is almost usable in the thread constructor, making the error message much harder to interpret than it otherwise could have been1. You didn't want to call this function; you wanted it run in the thread. So you need to pass the function itself to std::thread, and that means you somehow need to provide the function's parameters to the thread as well.
The easiest fix that I can see is to insert another lambda that calls for_each with the correct range:
std::thread t1([i, &nums]()
{
std::for_each(nums.begin(),
nums.begin()+i,
[](int& num)
{
num++;
});
});
1 I just ran a test with a few different compilers, and I'm slightly wrong: the error message if you provide something like a call to int func(), which returns a type that clearly cannot be callable, is just as messy. Nearly identical, in fact.
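Putting it all together, a complete pre-C++17 version with both halves dispatched through wrapping lambdas might look like this (one sketch of the fix, not the only way to spell it):
#include <algorithm>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> nums{1, 2, 3, 4};
    size_t i = nums.size() / 2;
    // each thread runs a lambda that performs the for_each over its own half
    std::thread t1([i, &nums]() {
        std::for_each(nums.begin(), nums.begin() + i, [](int& num) { num++; });
    });
    std::thread t2([i, &nums]() {
        std::for_each(nums.begin() + i, nums.end(), [](int& num) { num++; });
    });
    t1.join();
    t2.join();
    for (int n : nums) std::cout << n << ' '; // prints: 2 3 4 5
}
The two threads touch disjoint halves of the vector and never resize it, so no further synchronization is needed.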
I am trying to make a thread-safe queue in C++ with the help of std::mutex and std::condition_variable. The code:
#include <iostream>
#include <thread>
#include <queue>
#include <atomic>
#include <mutex>
#include <condition_variable>
using namespace std;
template<class T>
class SafeQueue {
public:
    queue<T> qu;
    mutex mut;
    condition_variable cv;
    SafeQueue() {}
    SafeQueue(queue<T> q) : qu(q) {}
    void push(int val) {
        unique_lock<mutex> uq(mut);
        cv.wait(uq, [&]() { return qu.empty(); });
        qu.push(val);
        uq.unlock();
    }
    bool isEmpty() {
        // unique_lock<mutex> uq(mut);
        // uq.unlock();
        cv.notify_all();
        return qu.empty();
    }
};

void inc(SafeQueue<int>& sq) {
    for (int i = 0; i < 10; i++)
        continue;
    if (sq.isEmpty())
        sq.push(1);
}

void inc1(SafeQueue<int>& sq) {
    for (int i = 0; i < 10; i++)
        continue;
    if (sq.isEmpty())
        sq.push(2);
}

int main() {
    queue<int> qu;
    SafeQueue<int> sq(qu);
    thread t1(inc, ref(sq));
    thread t2(inc1, ref(sq));
    t1.join();
    t2.join();
    cout << sq.qu.front();
}
A thread-safe queue is supposed to output 1 at the end, but the output is randomly either 1 or 2, which means it is not thread safe. Why is this particular program not working?
That doesn't mean the program isn't thread safe, and it doesn't mean the program is ill-defined or can crash.
It just means your program's logic is not written to add the items to the queue in any particular order.
If you want those two items to be added in a specific order, push both from one thread.
Thread safety doesn't mean your application runs as if it only had one thread.
Your program is working fine.
There are several aspects in which your code is flawed:
Whenever you access a shared structure, that access must be guarded by a mutex. You have a mutex, but you don't use it in isEmpty(). Document that connection; it is important not to lose track of it. Also, do the same for the CV: document when it is signaled.
Concerning isEmpty(), that function is useless anyway: even if the queue was not empty at one point in time, nothing prevents it from becoming empty the next second.
Re-read the documentation for unique_lock. Your way of using it is more complicated than necessary.
The use of the CV is also odd: normally you use it to notify waiters of a change, yet you signal it unconditionally in a function that only appears to query some state.
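For contrast, here is a minimal sketch of the conventional shape of such a queue: every access to the underlying container holds the mutex, push notifies the CV, and a blocking pop waits on it until an element is available. (This illustrates the points above; it is not a drop-in replacement for the asker's class.)
#include <condition_variable>
#include <mutex>
#include <queue>

template<class T>
class SafeQueue {
    std::queue<T> qu;           // guarded by mut
    std::mutex mut;
    std::condition_variable cv; // signaled whenever an element is pushed
public:
    void push(T val) {
        {
            std::lock_guard<std::mutex> lock(mut);
            qu.push(std::move(val));
        } // release the lock before notifying
        cv.notify_one();
    }
    T pop() { // blocks until an element is available
        std::unique_lock<std::mutex> lock(mut);
        cv.wait(lock, [&] { return !qu.empty(); });
        T val = std::move(qu.front());
        qu.pop();
        return val;
    }
};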
Basically my question is: is it safe to call front+pop and push from two threads without synchronization?
I've read about this and never found a clear answer. People are saying you should use a mutex, but some are hinting you can use two different mutexes, one for push and one for pop. Is that true?
Does this code have undefined behavior?
std::queue<int> queue;

int pop()
{
    int x = queue.front();
    queue.pop();
    return x;
}

void push(int x)
{
    queue.push(x);
}

int main()
{
    queue.push(1);
    std::thread t1(pop);
    std::thread t2(push, 2); // pass the argument for push
    t1.join();
    t2.join();
}
I would say it's undefined behavior, but you can design a queue where concurrent pop and push are safe, so why isn't std::queue like that?
No, it's not safe. The standard containers are not thread-safe: you can't mutate them from two threads. You will have to use a mutex or a lock-free queue. The problem is that std::queue has to be able to work with element types like std::string, which cannot be constructed or moved atomically, and it also has to support arbitrary sizes.
Most lock-free queues only work with machine-word-sized types and a fixed maximum capacity. If you need the flexibility of std::queue plus thread safety, you'll have to add a mutex yourself. Building the mutex into the default implementation would also be extremely wasteful, as every application would then pay for thread safety even when it doesn't need it.
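To make that concrete, here is a minimal sketch of such a wrapper (hypothetical names). The essential point is that front() and pop() happen under one and the same lock acquisition, so no other thread can slip in between them:
#include <mutex>
#include <optional>
#include <queue>

template<class T>
class LockedQueue {
    std::queue<T> q;
    std::mutex m;
public:
    void push(T x) {
        std::lock_guard<std::mutex> lock(m);
        q.push(std::move(x));
    }
    // front + pop combined into one atomic operation;
    // returns std::nullopt when the queue is empty
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(m);
        if (q.empty())
            return std::nullopt;
        T x = std::move(q.front());
        q.pop();
        return x;
    }
};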
Is there a way to ensure that blocked threads get woken up in the same order as they were blocked? I read somewhere that this would be called a "strong lock", but I found no resources on it.
On Mac OS X one can design a FIFO queue that stores the ids of all blocked threads and then use the nifty function pthread_cond_signal_thread_np() to wake up one specific thread, which is obviously non-standard and non-portable.
One way I can think of is to use a similar queue and, at the unlock() point, send a broadcast() to all threads and have them check which one is next in line.
But this would incur a lot of overhead.
A way around the problem would be to issue packaged_tasks to the queue and have it process them in order. But that seems more like a workaround to me than a solution.
Edit:
As pointed out by the comments, this question may sound irrelevant, since there is in principle no guaranteed ordering of locking attempts.
As a clarification:
I have something I call a ConditionLockQueue which is very similar to the NSConditionLock class in the Cocoa library, but it maintains a FIFO queue of blocked threads instead of a more-or-less random pool.
Essentially any thread can "line up" (with or without the requirement of a specific 'condition' - a simple integer value - to be met). The thread is then placed on the queue and blocks until it is the frontmost element in the queue whose condition is met.
This provides a very flexible way of synchronization and I have found it very helpful in my program.
Now what I would really need is a way to wake up a specific thread with a specific id.
But the two problems are almost alike.
It's pretty easy to build a lock object that uses numbered tickets to ensure that it's completely fair (the lock is granted in the order threads first tried to acquire it):
#include <mutex>
#include <condition_variable>
class ordered_lock {
    std::condition_variable cvar;
    std::mutex cvar_lock;
    unsigned int next_ticket, counter; // next ticket to hand out / ticket currently served
public:
    ordered_lock() : next_ticket(0), counter(0) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++; // take a ticket
        while (ticket != counter)            // the loop also guards against spurious wakeups
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;          // serve the next ticket
        cvar.notify_all();  // wake everyone; only the holder of that ticket proceeds
    }
};
Edit
To address Olaf's suggestion:
#include <mutex>
#include <condition_variable>
#include <queue>
class ordered_lock {
    std::queue<std::condition_variable *> cvar; // one condition variable per waiting thread, in FIFO order
    std::mutex cvar_lock;
    bool locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (locked) {
            std::condition_variable signal; // lives on this thread's stack while it waits
            cvar.emplace(&signal);
            signal.wait(acquire); // note: no predicate, so this relies on the absence of spurious wakeups
        } else {
            locked = true;
        }
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front()->notify_one(); // hand the lock directly to the longest-waiting thread
            cvar.pop();
        }
    }
};
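Since ordered_lock provides lock() and unlock(), it satisfies the standard BasicLockable requirements, so either version works with std::lock_guard. A small usage sketch (the worker function is hypothetical, purely for illustration):
#include <iostream>
#include <mutex> // std::lock_guard
#include <thread>
#include <vector>

ordered_lock fifo_lock; // either of the ordered_lock variants above

void worker(int id) {
    std::lock_guard<ordered_lock> guard(fifo_lock); // granted in FIFO order
    std::cout << "thread " << id << " owns the lock\n";
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(worker, i);
    for (auto& t : pool)
        t.join();
}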
I tried Chris Dodd's solution
https://stackoverflow.com/a/14792685/4834897
but the compiler returned errors, because standard containers can only hold element types that are copyable (or movable), and plain references (&) are not copyable, as you can see in the following answer by Akira Takahashi:
https://stackoverflow.com/a/10475855/4834897
So I corrected the solution using reference_wrapper, which provides a copyable reference.
EDIT: #Parvez Shaikh suggested a small alteration to make the code more readable: moving cvar.pop() after signal.wait() in the lock() function.
#include <mutex>
#include <condition_variable>
#include <queue>
#include <atomic>
#include <vector>
#include <functional> // std::reference_wrapper, std::ref
using namespace std;
class ordered_lock {
    queue<reference_wrapper<condition_variable>> cvar; // FIFO of waiting threads
    mutex cvar_lock;
    bool locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (locked) {
            condition_variable signal; // lives on this thread's stack while it waits
            cvar.emplace(std::ref(signal));
            signal.wait(acquire); // note: no predicate, so this relies on the absence of spurious wakeups
            cvar.pop();           // the woken thread removes its own entry
        } else {
            locked = true;
        }
    }
    void unlock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front().get().notify_one();
        }
    }
};
Another option is to use pointers instead of references, but it seems less safe.
Are we asking the right questions in this thread? And if so, are they being answered correctly?
Or put another way:
Have I completely misunderstood things here?
Edit Paragraph: It seems StatementOnOrder (see below) is false. See link1 (C++ threads etc. under Linux are often based on pthreads), and link2 (mentions the current scheduling policy as the determining factor) -- thanks to Cubbi from cppreference (ref). See also link, link, link, link. If the statement is false, then the method of pulling an atomic (!) ticket, as shown in the code below, is probably to be preferred!
Here goes...
StatementOnOrder: "Multiple threads that run into a locked mutex, and thus "go to sleep" in a particular order, will afterwards acquire ownership of the mutex and continue on in the same order."
Question: Is StatementOnOrder true or false?
void myfunction() {
    std::lock_guard<std::mutex> lock(mut);
    // do something
    // ...
    // mutex automatically unlocked when leaving the function
}
I'm asking this because all the code examples on this page to date seem to be either:
a) a waste (if StatementOnOrder is true)
or
b) seriously wrong (if StatementOnOrder is false).
So why do I say they might be "seriously wrong" if StatementOnOrder is false?
The reason is that all the code examples think they're being super-smart by utilizing std::condition_variable, but they take a lock before that, which (if StatementOnOrder is false) will mess up the order!
Just search this page for std::unique_lock<std::mutex> to see the irony.
So if StatementOnOrder is really false, you cannot run into a lock and then handle tickets and condition_variable business after that. Instead, you'll have to do something like this: pull an atomic ticket before running into any lock!
Why pull a ticket before running into a lock? Because here we're assuming StatementOnOrder to be false, so any ordering has to be done before the "evil" lock.
#include <mutex>
#include <thread>
#include <limits>
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <map>

std::mutex mut;
std::atomic<unsigned> num_atomic{std::numeric_limits<decltype(num_atomic.load())>::max()};
unsigned num_next{0};
std::map<unsigned, std::condition_variable> mapp;

void function() {
    unsigned next = ++num_atomic; // pull an atomic ticket
    decltype(mapp)::iterator it;
    std::unique_lock<std::mutex> lock(mut);
    if (next != num_next) {
        it = mapp.emplace(std::piecewise_construct,
                          std::forward_as_tuple(next),
                          std::forward_as_tuple()).first;
        it->second.wait(lock);
        mapp.erase(it);
    }

    // THE FUNCTION'S INTENDED WORK IS NOW DONE
    // ...
    // ...
    // THE FUNCTION'S INTENDED WORK IS NOW FINISHED

    ++num_next;
    it = mapp.find(num_next); // this is not necessarily mapp.begin(), since wrap-around occurs on the unsigned
    if (it != mapp.end()) {
        lock.unlock();
        it->second.notify_one();
    }
}
The above function guarantees that the order of execution follows the atomic tickets that are pulled. (Edit: using boost's intrusive map, and keeping each condition_variable on the stack as a local variable, would be a nice optimization here, to reduce free-store usage!)
But the main question is:
Is StatementOnOrder true or false?
(If it is true, then my code example above is also a waste, and we can just use a mutex and be done with it.)
I wish somebody like Anthony Williams would check out this page... ;)
I'm trying to implement what I think is a fairly simple design. I have a bunch of objects, each containing a std::map, and there will be multiple threads accessing them. I want to make sure that only one insert/erase happens on each of these maps at a time.
So I've been reading about boost::thread, class member mutexes, and using bind to pass to a class member, all of which are new things to me. I started with a simple example from a Dr. Dobb's article and tried modifying it. I was getting all kinds of compiler errors because my Threaded object has to be noncopyable. After reading up on that, I decided I could avoid the hassle by keeping a pointer to a mutex instead. So now I have code that compiles, but it results in the following error:
/usr/include/boost/shared_ptr.hpp:419:
T* boost::shared_ptr< <template-parameter-1-1> >::operator->() const
[with T = boost::mutex]: Assertion `px != 0' failed. Abort
Now I'm really stuck and would really appreciate help with the code as well as comments on where I'm going wrong conceptually. I realize there are some answered questions around these issues here already but I guess I'm still missing something.
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>
#include <map>
using namespace std;
class Threaded {
public:
std::map<int,int> _tsMap;
void count(int id) {
for (int i = 0; i < 100; ++i) {
_mx->lock();
//std::cout << id << ": " << i << std::endl;
_tsMap[i] ++;
_mx->unlock();
}
}
private:
boost::shared_ptr<boost::mutex> _mx;
};
int main(int argc, char* argv[]) {
Threaded th;
int i = 1;
boost::thread thrd1(boost::bind(&Threaded::count, &th, 1));
//boost::thread thrd2(boost::bind(&th.count, 2));
thrd1.join();
//thrd2.join();
return 0;
}
It looks like you're missing a constructor in your Threaded class that creates the mutex that _mx is intended to point at. In its current state (assuming you ran this code just as it is), the default constructor for Threaded calls the default constructor for shared_ptr, resulting in a null pointer (which is then dereferenced in your count() function).
You should add a constructor along the following lines:
Threaded::Threaded(int id)
    : _mx(new boost::mutex())
    , _mID(id) // assumes you add an int _mID member to store the id
{
}
Then you could remove the argument from your count function as well.
A mutex is non-copyable for good reasons. Trying to outsmart the compiler by using a pointer to a mutex is a really bad idea. If you succeed, the compiler will fail to notice the problems, but they will still be there and will turn round and bite you at runtime.
There are two solutions
store the mutex in your class as a static
store the mutex outside your class.
There are advantages for both - I prefer the second.
For some more discussion of this, see my answer here mutexes with objects
Conceptually, I think you do have a problem. Copying a std::shared_ptr just increases its reference count, so all the copied objects end up using the same underlying mutex, meaning that whenever one of your objects is in use, none of the rest of them can be used.
You, on the other hand, need each object to get its own mutex, unrelated to the mutexes of the other objects.
What you need is to keep the mutex defined in the private section of the class as it is, but ensure that your copy constructor and copy assignment operator are overloaded to create a new one from scratch, one bearing no relation to the mutex in the object being copied/assigned from. A sketch of this follows below.
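A minimal sketch of that idea, using std::mutex rather than boost (locking the source object while its map is copied is my addition, to keep the map stable while it is read):
#include <map>
#include <mutex>

class Threaded {
public:
    Threaded() = default;
    // copy the map contents, but give the new object its own mutex
    Threaded(const Threaded& other) {
        std::lock_guard<std::mutex> lock(other._mx);
        _tsMap = other._tsMap;
    }
    Threaded& operator=(const Threaded& other) {
        if (this != &other) {
            std::scoped_lock lock(_mx, other._mx); // lock both sides, deadlock-free
            _tsMap = other._tsMap;
        }
        return *this;
    }
    void count() {
        for (int i = 0; i < 100; ++i) {
            std::lock_guard<std::mutex> lock(_mx);
            ++_tsMap[i];
        }
    }
private:
    std::map<int, int> _tsMap;
    mutable std::mutex _mx; // mutable so a const source can be locked; never copied
};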