Am I using this deque in a thread safe manner? - c++

I'm trying to understand multithreading in C++. In the following code, will the deque tempData declared in retrieve() always have every element processed exactly once, or could there be multiple copies of tempData across multiple threads with stale data, causing some elements to be processed more than once? I'm not sure whether passing by reference actually means there is only one copy in this case.
static mutex m;

void AudioAnalyzer::analysisThread(deque<shared_ptr<AudioAnalysis>>& aq)
{
    while (true)
    {
        m.lock();
        if (aq.empty())
        {
            m.unlock();
            break;
        }
        auto aa = aq.front();
        aq.pop_front();
        m.unlock();
        if (false) //testing
        {
            retrieveFromDb(aa);
        }
        else
        {
            analyzeAudio(aa);
        }
    }
}

void AudioAnalyzer::retrieve()
{
    deque<shared_ptr<AudioAnalysis>> tempData(data);
    vector<future<void>> futures;
    for (int i = 0; i < NUM_THREADS; ++i)
    {
        futures.push_back(async(bind(&AudioAnalyzer::analysisThread, this, _1), ref(tempData)));
    }
    for (auto& f : futures)
    {
        f.get();
    }
}

Looks OK to me.
Threads share memory, and since the reference to tempData effectively arrives in each thread as a pointer, every thread sees exactly the same pointer value and the same single copy of tempData. [You can check that, if you like, with a bit of global code or some logging.]
The mutex then ensures single-threaded access, at least within these threads.
One problem: somewhere there must be a push onto the deque, and that may need to be protected by the same mutex as well. [Obviously the push_back onto the futures vector is just local.]
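For illustration, a minimal sketch of what a locked producer might look like; the function name enqueue and the member deque data are assumptions for the sketch, not from the original code:

void AudioAnalyzer::enqueue(shared_ptr<AudioAnalysis> aa)
{
    lock_guard<mutex> guard(m); // same static mutex the worker threads use
    data.push_back(std::move(aa));
}

If all pushes happen before the worker threads are launched, as appears to be the case in retrieve() above, the producer-side lock is not strictly necessary, but it is cheap insurance.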

Related

About shared_mutex and shared_ptr across multiple threads

I implemented code in which multiple instances running on different threads read other instances' data using a reader-writer lock and shared_ptr. It seems to work, but I am not 100% sure about it, and I have some questions about the usage of these.
Detail
I have multiple instances of a class called Chunk, and each instance does some calculations in a dedicated thread. A chunk needs to read its neighbour chunks' data as well as its own, but it doesn't write its neighbours' data, so a reader-writer lock is used. Also, neighbours can be set at runtime. For example, I might want to set a different neighbour chunk at runtime, sometimes just nullptr. It is possible to delete a chunk at runtime, too. Raw pointers could be used, but I thought shared_ptr and weak_ptr were better for this, in order to keep track of lifetimes: own data in a shared_ptr and neighbours' data in weak_ptr.
I provide a simpler version of my code below. ChunkData holds the data and a mutex for it. I use InitData for data initialization, and the DoWork function is called in a dedicated thread after that. Other functions can be called from the main thread.
This seems to work, but I am not so confident, especially about the use of shared_ptr across multiple threads.
What happens if one thread calls shared_ptr's reset() (in the dtor and InitData) while another uses it via weak_ptr's lock() (in DoWork)? Does this need a lock on dataMutex or chunkMutex?
How about copying (in SetNeighbour)? Do I need locks for that as well?
I think the other parts are OK, but please let me know if you find anything dangerous. I appreciate it.
By the way, I considered storing a shared_ptr to Chunk instead of ChunkData, but decided against it because internal code, which I don't manage, has a GC system and can delete a pointer to a Chunk when I don't expect it.
class Chunk
{
public:
    class ChunkData
    {
    public:
        shared_mutex dataMutex; // mutex to read/write data
        int* data;
        int size;

        ChunkData() : data(nullptr) { }
        ~ChunkData()
        {
            if (data)
            {
                delete[] data;
                data = nullptr;
            }
        }
    };

private:
    mutex chunkMutex; // mutex to read/write member variables
    shared_ptr<ChunkData> chunkData;
    weak_ptr<ChunkData> neighbourChunkData;
    string name;
    string result;

public:
    Chunk(string _name)
        : chunkData(make_shared<ChunkData>())
        , name(_name)
    {
    }

    ~Chunk()
    {
        EndProcess();
        unique_lock lock(chunkMutex); // is this needed?
        chunkData.reset();
    }

    void InitData(int size)
    {
        ChunkData* NewData = new ChunkData();
        NewData->size = size;
        NewData->data = new int[size];
        {
            unique_lock lock(chunkMutex); // is this needed?
            chunkData.reset(NewData);
            cout << "init chunk " << name << endl;
        }
    }

    // This is executed in another thread, e.g. thread t(&Chunk::DoWork, this);
    void DoWork()
    {
        lock_guard lock(chunkMutex); // we modify some members such as result (string) while reading chunk data, so we need this.
        if (chunkData)
        {
            shared_lock readLock(chunkData->dataMutex);
            if (chunkData->data)
            {
                // read chunkData->data[i] and modify some members such as result (string)
                for (int i = 0; i < chunkData->size; ++i)
                {
                    // Is this fine, or should I write to result outside of the readLock scope?
                    result += to_string(chunkData->data[i]) + " ";
                }
            }
        }

        // does this work?
        if (shared_ptr<ChunkData> neighbour = neighbourChunkData.lock())
        {
            shared_lock readLock(neighbour->dataMutex);
            if (neighbour->data)
            {
                // read neighbour->data[i] and modify some members as above
            }
        }
    }

    shared_ptr<ChunkData> GetChunkData()
    {
        unique_lock lock(chunkMutex);
        return chunkData;
    }

    void SetNeighbour(Chunk* neighbourChunk)
    {
        if (neighbourChunk)
        {
            // safe?
            shared_ptr<ChunkData> newNeighbourData = neighbourChunk->GetChunkData();
            unique_lock lock(chunkMutex); // lock for chunk properties
            {
                shared_lock readLock(newNeighbourData->dataMutex); // not sure if this is needed.
                neighbourChunkData = newNeighbourData;
            }
        }
    }

    int GetDataAt(int index)
    {
        shared_lock readLock(chunkData->dataMutex);
        if (chunkData->data && 0 <= index && index < chunkData->size)
        {
            return chunkData->data[index];
        }
        return 0;
    }

    void SetDataAt(int index, int element)
    {
        unique_lock writeLock(chunkData->dataMutex);
        if (chunkData->data && 0 <= index && index < chunkData->size)
        {
            chunkData->data[index] = element;
        }
    }
};
Edit 1
I added more detail to the DoWork function. Chunk data is read and the chunk's member variables are edited in the function.
After Homer512's answer, I came up with further questions.
A) In the DoWork function I write a member variable inside a read lock. Should I only read data in a read-lock scope, and if I need to modify other data based on the read data, do I have to do it outside of the read lock? For example, copy the whole array to a local variable inside the read lock, then modify other members outside of the read lock using the local copy.
B) I followed Homer512 and modified GetDataAt/SetDataAt as below. I read/write-lock chunkData->dataMutex before unlocking chunkMutex. I also do this in the DoWork function. Should I instead take the locks separately? For example, make a local shared_ptr variable and assign chunkData to it under a chunkMutex lock, unlock that, then lastly read/write-lock the local variable's dataMutex and read/write the data.
int GetDataAt(int index)
{
    lock_guard chunkLock(chunkMutex);
    shared_lock readLock(chunkData->dataMutex);
    if (chunkData->data && 0 <= index && index < chunkData->size)
    {
        return chunkData->data[index];
    }
    return 0;
}

void SetDataAt(int index, int element)
{
    lock_guard chunkLock(chunkMutex);
    unique_lock writeLock(chunkData->dataMutex);
    if (chunkData->data && 0 <= index && index < chunkData->size)
    {
        chunkData->data[index] = element;
    }
}
I have several remarks:
~ChunkData: You could change your data member from int* to unique_ptr<int[]> to get the same result without an explicit destructor. Your code is correct though, just less convenient.
~Chunk: I don't think you need the lock or the reset call. By the time the destructor runs, by definition, no one should hold a reference to the Chunk object, so the lock can never be contested. And reset is unnecessary because the shared_ptr destructor handles that.
InitData: Yes, the lock is needed because InitData can race with DoWork. You could avoid this by moving InitData to the constructor but I assume there are reasons for this division. You could also change the shared_ptr to std::atomic<std::shared_ptr<ChunkData> > to avoid the lock.
It is more efficient to write InitData like this:
void InitData(int size)
{
    std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
    NewData->size = size;
    NewData->data = new int[size]; // or std::make_unique<int[]>(size)
    {
        std::lock_guard<std::mutex> lock(chunkMutex);
        chunkData.swap(NewData);
    }
    // deletes the old chunkData outside the locked region if it was initialized before
}
make_shared avoids an additional memory allocation for the reference counter. This also moves all allocations and deallocations out of the critical section.
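For reference, a minimal sketch of the std::atomic<std::shared_ptr> alternative mentioned above; this requires C++20 and is a fragment of a hypothetical Chunk, not a drop-in replacement (result and neighbourChunkData would still need chunkMutex):

std::atomic<std::shared_ptr<ChunkData>> chunkData; // instead of shared_ptr + chunkMutex

void InitData(int size)
{
    std::shared_ptr<ChunkData> newData = std::make_shared<ChunkData>();
    newData->size = size;
    newData->data = new int[size];
    chunkData.store(newData); // atomic publish, no mutex needed
}

void DoWork()
{
    std::shared_ptr<ChunkData> local = chunkData.load(); // atomic snapshot
    if (local)
    {
        std::shared_lock readLock(local->dataMutex);
        if (local->data)
        {
            // read local->data[i] as before
        }
    }
}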
DoWork: Your comment says "read chunkData->data[i] and modify some members". You only take a shared_lock but say that you modify members. Well, which is it, reading or writing? Or do you mean that you modify Chunk but not ChunkData, with Chunk being protected by its own mutex?
SetNeighbour: You need to lock both your own chunkMutex and the neighbour's. You should not hold both at the same time, to avoid the dining philosophers problem (though std::lock solves this).
void SetNeighbour(Chunk* neighbourChunk)
{
    if(! neighbourChunk)
        return;
    std::shared_ptr<ChunkData> newNeighbourData;
    {
        std::lock_guard<std::mutex> lock(neighbourChunk->chunkMutex);
        newNeighbourData = neighbourChunk->chunkData;
    }
    std::lock_guard<std::mutex> lock(this->chunkMutex);
    this->neighbourChunkData = newNeighbourData;
}
GetDataAt and SetDataAt: You need to lock chunkMutex. Otherwise you might race with InitData. There is no need to use std::lock because the order of locks is never swapped around.
EDIT 1:
DoWork: The line if (shared_ptr<ChunkData> neighbour = neighbourChunkData.lock()) doesn't keep the neighbour alive. Move the variable declaration out of the if to keep the reference.
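That is, roughly:

shared_ptr<ChunkData> neighbour = neighbourChunkData.lock(); // declaration hoisted out of the if
if (neighbour)
{
    shared_lock readLock(neighbour->dataMutex);
    // neighbour stays alive for the remainder of the enclosing scope
}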
EDIT: Alternative design proposal
What I'm bothered with is that your DoWork may be unable to proceed if InitData is still running or waiting to run. How do you want to deal with this? I suggest you make it possible to wait until the work can be done. Something like this:
class Chunk
{
    std::mutex chunkMutex;
    std::shared_ptr<ChunkData> chunkData;
    std::weak_ptr<ChunkData> neighbourChunkData;
    std::condition_variable chunkSet;

    void waitForChunk(std::unique_lock<std::mutex>& lock)
    {
        while(! chunkData)
            chunkSet.wait(lock);
    }
public:
    // modified version of my code above
    void InitData(int size)
    {
        std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
        NewData->size = size;
        NewData->data = new int[size]; // or std::make_unique<int[]>(size)
        {
            std::lock_guard<std::mutex> lock(chunkMutex);
            chunkData.swap(NewData);
        }
        chunkSet.notify_all();
    }

    void DoWork()
    {
        std::unique_lock<std::mutex> ownLock(chunkMutex);
        waitForChunk(ownLock); // blocks until the other thread finishes InitData
        {
            shared_lock readLock(chunkData->dataMutex);
            ...
        }
        shared_ptr<ChunkData> neighbour = neighbourChunkData.lock();
        if(! neighbour)
            return;
        shared_lock readLock(neighbour->dataMutex);
        ...
    }

    void SetNeighbour(Chunk* neighbourChunk)
    {
        if(! neighbourChunk)
            return;
        shared_ptr<ChunkData> newNeighbourData;
        {
            std::unique_lock<std::mutex> lock(neighbourChunk->chunkMutex);
            neighbourChunk->waitForChunk(lock); // wait until the neighbour has finished InitData
            newNeighbourData = neighbourChunk->chunkData;
        }
        std::lock_guard<std::mutex> ownLock(this->chunkMutex);
        this->neighbourChunkData = std::move(newNeighbourData);
    }
};
The downside to this is that you could deadlock if InitData is never called or if it failed with an exception. There are ways around this, like using a std::shared_future which knows whether it is valid (set when InitData is scheduled) and whether it failed (it records the exception of the associated promise or packaged_task).
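A rough sketch of that shared_future idea follows; the names initPromise and initFuture are illustrative, not from the original code, and ChunkData is as defined in the question:

#include <future>
#include <memory>
#include <shared_mutex>

class Chunk
{
    std::promise<std::shared_ptr<ChunkData>> initPromise;
    std::shared_future<std::shared_ptr<ChunkData>> initFuture =
        initPromise.get_future().share();
public:
    void InitData(int size)
    {
        try
        {
            auto newData = std::make_shared<ChunkData>();
            newData->size = size;
            newData->data = new int[size];
            initPromise.set_value(std::move(newData)); // publish exactly once
        }
        catch (...)
        {
            // waiters see the exception instead of deadlocking
            initPromise.set_exception(std::current_exception());
        }
    }

    void DoWork()
    {
        // blocks until InitData has run; rethrows if InitData failed
        std::shared_ptr<ChunkData> data = initFuture.get();
        std::shared_lock readLock(data->dataMutex);
        // ... read data->data[i] as before ...
    }
};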

Thread synchronization between data pointed by vectors of std::shared_ptr

I'm pretty new to concurrent programming, and I have a specific issue to which I could not find a solution by browsing the internet.
Basically I have this situation (schematic pseudocode):
void fun1(std::vector<std::shared_ptr<SmallObj>>& v) {
    for(int i=0; i<v.size(); i++)
        .. read and write on *v[i] ..
}

void fun2(std::vector<std::shared_ptr<SmallObj>>& w) {
    for(int i=0; i<w.size(); i++)
        .. just read on *w[i] ..
}

int main() {
    std::vector<std::shared_ptr<SmallObj>> tot;
    for(int iter=0; iter<iterMax; iter++) {
        for(int nObj=0; nObj<nObjMax; nObj++)
            .. create a SmallObj in the heap and store a shared_ptr in tot ..
        std::vector<std::shared_ptr<SmallObj>> v, w;
        .. copy elements of "tot" in v and w ..
        fun1(v);
        fun2(w);
    }
    return 0;
}
What I want to do is operate concurrently, spawning two threads to execute fun1 and fun2, but I need to regulate access to the SmallObjs using some locking mechanism. How can I do it? In the literature I can only find examples of using mutexes to lock access to a specific object or a portion of code, but not to the same variables pointed to by different objects (in this case v and w).
Thank you very much, and sorry for my ignorance on the matter.
I need to regulate the access to the SmallObjs using some locking mechanism. How can I do it?
Use getters and setters for your data members. Use a std::mutex (or a std::recursive_mutex depending on whether recursive locking is needed) data member to guard the accesses, then always lock with a lock guard.
Example (also see the comments in the code):
class SmallObject{
public:
    int getID() const{
        std::lock_guard<std::mutex> lck(m_mutex);
        return ....;
    }
    void setID(int id){
        std::lock_guard<std::mutex> lck(m_mutex);
        ....;
    }
    MyType calculate() const{
        std::lock_guard<std::mutex> lck(m_mutex);
        //HERE is a GOTCHA if `m_mutex` is a `std::mutex`
        int k = this->getID(); //Ooopsie... Deadlock
        //To do the above, change the declaration of `m_mutex` from
        //std::mutex to std::recursive_mutex
    }
private:
    ..some data
    mutable std::mutex m_mutex;
};
The simplest solution is to hold a std::mutex for the whole vector:

#include <mutex>
#include <thread>
#include <vector>

void fun1(std::vector<std::shared_ptr<SmallObj>>& v, std::mutex& mtx) {
    for(int i=0; i<v.size(); i++) {
        //Anything you can do before read/write of *v[i]...
        {
            std::lock_guard<std::mutex> guard(mtx);
            //read-write *v[i]
        }
        //Anything you can do after read/write of *v[i]...
    }
}

void fun2(std::vector<std::shared_ptr<SmallObj>>& w, std::mutex& mtx) {
    for(int i=0; i<w.size(); i++) {
        //Anything that can happen before reading *w[i]
        {
            std::lock_guard<std::mutex> guard(mtx);
            //read *w[i]
        }
        //Anything that can happen after reading *w[i]
    }
}

int main() {
    std::mutex mtx;
    std::vector<std::shared_ptr<SmallObj>> tot;
    for(int iter=0; iter<iterMax; iter++) {
        for(int nObj=0; nObj<nObjMax; nObj++)
            .. create a SmallObj in the heap and store a shared_ptr in tot ..
        std::vector<std::shared_ptr<SmallObj>> v, w;
        .. copy elements of "tot" in v and w ..
        std::thread t1([&v,&mtx] { fun1(v,mtx); });
        std::thread t2([&w,&mtx] { fun2(w,mtx); });
        t1.join();
        t2.join();
    }
    return 0;
}
However, you will only realistically get parallelism on the bits done in the before/after blocks in the loops of fun1() and fun2().
You could further increase parallelism by introducing more locks.
For example, you can maybe get away with only 2 mutexes which control the odd and even elements:
void fun1(std::vector<int>& v, std::mutex& mtx0, std::mutex& mtx1){
    for(size_t i{0}; i<v.size(); ++i){
        {
            std::lock_guard<std::mutex> guard(i%2==0 ? mtx0 : mtx1);
            //read-write *v[i]
        }
    }
}
With a similar format for fun2().
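For instance, fun2() might look like this in the same scheme (read-only body, same two mutexes):

void fun2(std::vector<int>& w, std::mutex& mtx0, std::mutex& mtx1){
    for(size_t i{0}; i<w.size(); ++i){
        {
            std::lock_guard<std::mutex> guard(i%2==0 ? mtx0 : mtx1);
            //just read *w[i]
        }
    }
}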
You might be able to reduce contention by working from opposite ends of the vectors, or by using try_lock, moving on to subsequent elements and 'coming back' to a locked element when it becomes available.
That can matter most when an iteration of one function is much more expensive than the other's and there is some advantage in getting the results from the 'faster' one before the other finishes.
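Here is a sketch of that try_lock approach, assuming one mutex per element (the locks vector is an illustration, not part of the original code):

#include <memory>
#include <mutex>
#include <vector>

void fun1(std::vector<std::shared_ptr<SmallObj>>& v, std::vector<std::mutex>& locks)
{
    std::vector<size_t> pending;
    for (size_t i = 0; i < v.size(); ++i)
    {
        std::unique_lock<std::mutex> guard(locks[i], std::try_to_lock);
        if (guard.owns_lock())
        {
            //read-write *v[i]
        }
        else
        {
            pending.push_back(i); // element busy, come back to it later
        }
    }
    for (size_t i : pending)
    {
        std::lock_guard<std::mutex> guard(locks[i]); // block this time
        //read-write *v[i]
    }
}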
Alternatives:
It's obviously possible to add an std::mutex to each object.
Whether that works / is necessary will depend on what is actually done in the functions fun1 and fun2 as well as how those mutexes are managed.
If it's necessary to lock before either loop starts, there may in fact be no benefit to parallelism, because one of fun1() or fun2() will essentially wait for the other to finish and the two will in effect run in series.

Thread pool stuck on wait condition

My C++ program gets stuck using this thread pool class:
class ThreadPool {
    unsigned threadCount;
    std::vector<std::thread> threads;
    std::list<std::function<void(void)> > queue;

    std::atomic_int jobs_left;
    std::atomic_bool bailout;
    std::atomic_bool finished;
    std::condition_variable job_available_var;
    std::condition_variable wait_var;
    std::mutex wait_mutex;
    std::mutex queue_mutex;
    std::mutex mtx;

    void Task() {
        while (!bailout) {
            next_job()();
            --jobs_left;
            wait_var.notify_one();
        }
    }

    std::function<void(void)> next_job() {
        std::function<void(void)> res;
        std::unique_lock<std::mutex> job_lock(queue_mutex);
        // Wait for a job if we don't have any.
        job_available_var.wait(job_lock, [this]()->bool { return queue.size() || bailout; });
        // Get job from the queue
        mtx.lock();
        if (!bailout) {
            res = queue.front();
            queue.pop_front();
        } else {
            // If we're bailing out, 'inject' a job into the queue to keep jobs_left accurate.
            res = [] {};
            ++jobs_left;
        }
        mtx.unlock();
        return res;
    }

public:
    ThreadPool(int c)
        : threadCount(c)
        , threads(threadCount)
        , jobs_left(0)
        , bailout(false)
        , finished(false)
    {
        for (unsigned i = 0; i < threadCount; ++i)
            threads[i] = std::move(std::thread([this, i] { this->Task(); }));
    }

    ~ThreadPool() {
        JoinAll();
    }

    void AddJob(std::function<void(void)> job) {
        std::lock_guard<std::mutex> lock(queue_mutex);
        queue.emplace_back(job);
        ++jobs_left;
        job_available_var.notify_one();
    }

    void JoinAll(bool WaitForAll = true) {
        if (!finished) {
            if (WaitForAll) {
                WaitAll();
            }
            // note that we're done, and wake up any thread that's
            // waiting for a new job
            bailout = true;
            job_available_var.notify_all();
            for (auto& x : threads)
                if (x.joinable())
                    x.join();
            finished = true;
        }
    }

    void WaitAll() {
        std::unique_lock<std::mutex> lk(wait_mutex);
        if (jobs_left > 0) {
            wait_var.wait(lk, [this] { return this->jobs_left == 0; });
        }
        lk.unlock();
    }
};
When I stop the blocked execution, gdb says it is stuck in (std::unique_lock&, ThreadPool::WaitAll()::{lambda()#1})+58>
I'm using g++ v5.3.0 with support for C++14 (-std=c++1y).
How can I avoid this problem?
Update
I've edited (rewrote) the class: https://github.com/edoz90/threadpool/blob/master/ThreadPool.h
The issue here is a race condition on your job count. You're using one mutex to protect the queue and another to protect the count, which is semantically equivalent to the queue size. Clearly the second mutex is redundant (and improperly used), as is the jobs_left variable itself.
Every method that deals with the queue has to gain exclusive access to it (even JoinAll to read its size), so you should use the same queue_mutex in the three bits of code that tamper with it (JoinAll, AddJob and next_job).
Btw, splitting the code at next_job() is pretty awkward IMO. You would avoid calling a dummy function if you handled the worker thread body in a single function.
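Something like this sketch of a single-function worker body, reusing the members from the question's class and only queue_mutex:

void Task()
{
    for (;;)
    {
        std::function<void(void)> job;
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            job_available_var.wait(lock, [this] { return !queue.empty() || bailout; });
            if (bailout && queue.empty())
                return; // exit instead of running a dummy job
            job = std::move(queue.front());
            queue.pop_front();
        }
        job(); // run the job outside the lock
        if (--jobs_left == 0)
            wait_var.notify_all();
    }
}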
EDIT:
As other comments have already stated, you would probably be better off getting your eyes off the code and reconsidering the problem globally for a while.
The only thing you need to protect here is the job queue, so you need only one mutex.
Then there is the problem of waking up the various actors, which requires a condition variable, since C++ basically does not give you any other usable synchronization object.
Here again you don't need more than one variable. Terminating the thread pool is equivalent to dequeueing the jobs without executing them, which can be done any which way, be it in the worker threads themselves (skipping execution if the termination flag is set) or in the JoinAll function (clearing the queue after gaining exclusive access).
Last but not least, you might want to invalidate AddJob once someone decided to close the pool, or else you could get stuck in the destructor while someone keeps feeding in new jobs.
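For example, AddJob could refuse work once shutdown has begun; a sketch based on the question's members:

void AddJob(std::function<void(void)> job)
{
    std::lock_guard<std::mutex> lock(queue_mutex);
    if (bailout)
        return; // pool is shutting down, reject new work
    queue.emplace_back(std::move(job));
    ++jobs_left;
    job_available_var.notify_one();
}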
I think you need to keep it simple.
You seem to be using one mutex too many. There's queue_mutex, and you use it when you add and process jobs.
So what's the need for another, separate mutex when you are waiting to read the queue?
Why can't you just use a condition variable with the same queue_mutex to read the queue in your WaitAll() method?
Update
I would also recommend using a lock_guard instead of the unique_lock in your WaitAll. There really isn't a need to hold the queue_mutex beyond WaitAll under exceptional conditions; if you exit WaitAll exceptionally, it will be released regardless.
Update2
Ignore my update above. Since you are using a condition variable, you can't use a lock_guard in WaitAll. But if you are using a unique_lock, always go with the try_to_lock version, especially if you have more than a couple of control paths.
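For reference, the try_to_lock form looks like this; note that with a condition variable you must still own the lock before waiting, so whether this helps depends on the surrounding control flow:

std::unique_lock<std::mutex> lk(wait_mutex, std::try_to_lock);
if (lk.owns_lock())
{
    // mutex acquired without blocking
}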

how to insert vector only once in multiple thread

I have below code snippet.
std::vector<int> g_vec;

void func()
{
    //I added a double check to avoid the thread needing the lock every time.
    if(g_vec.empty())
    {
        //lock
        if(g_vec.empty())
        {
            //insert items into g_vec
        }
        //unlock
    }
    ...
}
func will be called by multiple threads, and I want items to be inserted into g_vec only once, which is a bit like a singleton instance. Regarding singleton instances, I found there is a DCLP issue.
Question:
1. Is my above code snippet thread safe, or does it have the DCLP issue?
2. If it is not thread safe, how should I modify it?
Your code has a data race.
The first check outside the lock is not synchronized with the insertion inside the lock. That means, you may end up with one thread reading the vector (through .empty()) while another thread is writing the vector (through .insert()), which is by definition a data race and leads to undefined behavior.
A solution for exactly this kind of problem is given by the standard in form of call_once.
#include <mutex>
#include <vector>

std::vector<int> g_vec;
std::once_flag g_flag;

void func()
{
    std::call_once(g_flag, [](){ g_vec.insert( ... ); });
}
In your example, a second thread could find a non-empty, half-initialized vector, which is something you won't want anyway. You should use a flag and mark it when the initialization job is completed. A standard one is better, but a simple static int will do the job as well:
std::vector<int> g_vec;

void func()
{
    //double check to avoid the thread needing the lock every time
    static int called = 0;
    if(!called)
    {
        lock()
        if(!called)
        {
            //insert items into g_vec
            called = 1;
        }
        unlock()
    }
    ...
}
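For comparison, a race-free double-checked pattern can also be written with a std::atomic<bool> flag and acquire/release ordering (C++11). This is a sketch; g_initialized and g_mutex are illustrative names, not from the original snippet:

#include <atomic>
#include <mutex>
#include <vector>

std::vector<int> g_vec;
std::atomic<bool> g_initialized{false};
std::mutex g_mutex;

void func()
{
    if (!g_initialized.load(std::memory_order_acquire)) // fast path, no lock
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        if (!g_initialized.load(std::memory_order_relaxed)) // re-check under the lock
        {
            //insert items into g_vec
            g_initialized.store(true, std::memory_order_release);
        }
    }
    // ...
}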

Communication b/w two threads over a common datastructure. Design Issue

I currently have two threads, a producer and a consumer. The producer is a static method that inserts data into a deque-type static container and informs the consumer through a boost::condition_variable that an object has been inserted into the deque. The consumer then reads data from the deque and removes it from the container. The two threads communicate using the boost::condition_variable.
Here is an abstract of what is happening. This is the code for the producer and consumer:
//Static Method : This is the producer. Different classes add data to the container using this method
void C::Add_Data(obj a)
{
    try
    {
        int size = MyContainer.size();
        UpdateTextBoxA("Current Size is " + size);
        UpdateTextBoxB("Running");
        MyContainer.push_back(a);
        condition_consumer.notify_one(); //This condition variable is a static member
        UpdateTextBoxB("Stopped");
    }
    catch (std::exception& e)
    {
        std::string err = e.what();
    }
}//end method
//Consumer Method - Runs in a separate independent thread
void C::Read_Data()
{
    while(true)
    {
        boost::mutex::scoped_lock lock(mutex_c);
        while(MyContainer.size()!=0)
        {
            try
            {
                obj a = MyContainer.front();
                ....
                ....
                ....
                MyContainer.pop_front();
            }
            catch (std::exception& e)
            {
                std::string err = e.what();
            }
        }
        condition_consumer.wait(lock);
    }
}//end method
Now objects are being inserted into the deque very fast, about 500 objects a second. While running this I noticed that TextBoxB always showed "Stopped", while I believe it was supposed to toggle between "Running" and "Stopped". It is also very slow. Any suggestions on what I might not have considered and might be doing wrong?
1) You should do MyContainer.push_back(a); under the mutex - otherwise you get a data race, which is undefined behaviour (+ you may need to protect MyContainer.size(); with the mutex too, depending on its type and the C++ standard/compiler version you use).
2) void C::Read_Data() should be:
void C::Read_Data()
{
    scoped_lock slock(mutex_c);
    while(true) // you may also need some exit condition/mechanism
    {
        condition_consumer.wait(slock,[&]{return !MyContainer.empty();});
        // at this line MyContainer.empty()==false and slock is locked,
        // so you may pop a value from the deque
    }
}
3) You are mixing the logic of a concurrent queue with the producing/consuming logic. Instead, you can isolate the concurrent-queue part into a stand-alone entity:
// C++98
template<typename T>
class concurrent_queue
{
    queue<T> q;
    mutable mutex m;
    mutable condition_variable c;
public:
    void push(const T &t)
    {
        // the lock_guard temporary lives until the end of the full
        // expression, so the lock is held across push and notify_one
        (lock_guard<mutex>(m)),
            q.push(t),
            c.notify_one();
    }
    void pop(T &result)
    {
        unique_lock<mutex> u(m);
        while(q.empty())
            c.wait(u);
        result = q.front();
        q.pop();
    }
};
Thanks for your reply. Could you explain the second parameter in the condition wait statement, [&]{return !MyContainer.empty();}?
There is a second version of condition_variable::wait which takes a predicate as its second parameter. It basically waits while the predicate is false, which helps "ignore" spurious wake-ups.
[&]{return !MyContainer.empty();} is a lambda function. Lambdas are a new feature of C++11 that allow you to define functions "in-place". If you don't have C++11, then just make a stand-alone predicate or use the one-argument version of wait with a manual while loop:
while(MyContainer.empty()) condition_consumer.wait(lock);
One question: in your 3rd point you suggested that I should isolate the entire queue, while my add-to-queue method is static and the consumer (queue reader) runs forever in a separate thread. Could you tell me why that is a flaw in my design?
There is no problem with "runs forever" or with static. You can even make a static concurrent_queue<T> member, if your design requires that.
The flaw is that multithreaded synchronization is coupled with other kinds of work. When you have a concurrent_queue, all synchronization is isolated inside that primitive, and the code which produces/consumes data is not polluted with locks and waits:
concurrent_queue<int> c;

thread producer([&]
{
    for(int i=0;i!=100;++i)
        c.push(i);
});

thread consumer([&]
{
    int x;
    do{
        c.pop(x);
        std::cout << x << std::endl;
    }while(x!=11);
});

producer.join();
consumer.join();
As you can see, there is no "manual" synchronization of push/pop, and the code is much cleaner.
Moreover, when you decouple your components in this way, you can test them in isolation. They also become more reusable.