Deadlock occurring after multiple iterations of queries in threads (multithreading) - C++

I encounter deadlocks while executing the code snippet below as a thread.
void thread_lifecycle(
    Queue<std::tuple<int64_t, int64_t, uint8_t>, QUEUE_SIZE>& query,
    Queue<std::string, QUEUE_SIZE>& output_queue,
    std::vector<Object>& pgs,
    bool* pgs_executed, // Initialized to an array of false-values
    std::mutex& pgs_executed_mutex,
    std::atomic<uint32_t>& atomic_pgs_finished
){
    bool iter_bool = false;
    std::tuple<int64_t, int64_t, uint8_t> next_query;
    std::string output = "";
    int64_t lower, upper;
    while(true) {
        // Get next query
        next_query = query.pop_front();
        // Stop condition reached, terminate thread
        if (std::get<2>(next_query) == uint8_t(-1)) break;
        // Set query params
        lower = std::get<0>(next_query);
        upper = std::get<1>(next_query);
        // Scan bool array
        for (uint32_t i = 0; i < pgs.size(); i++){
            // First lock for reading
            pgs_executed_mutex.lock();
            if (pgs_executed[i] == iter_bool) {
                pgs_executed[i] = !pgs_executed[i];
                // Unlock and execute the query
                pgs_executed_mutex.unlock();
                output = pgs.at(i).get_result(lower, upper);
                // If the query yielded a result, add it to the output
                if (output.length() != 0) {
                    output_queue.push_back(output);
                }
                // Inform main thread in case of last result
                if (++atomic_pgs_finished >= pgs.size()) {
                    output_queue.push_back("LAST_RESULT_IDENTIFIER");
                    atomic_pgs_finished.exchange(0);
                }
            } else {
                pgs_executed_mutex.unlock();
                continue;
            }
        }
        // Finally flip for next query
        iter_bool = !iter_bool;
    }
}
Explained:
I have a vector of objects containing information which can be queried (similar to a table in a database). Each thread can access the objects, and all of them iterate the vector ONCE to query the objects which have not been queried yet, returning results, if any.
For the next query they go through the vector again, and so on... I use the bool* array to mark the entries which are currently being queried, so that the threads can synchronize and determine which entry should be queried next.
If all have been executed, the last thread (possibly holding the last results) also returns an identifier to inform the main thread that all objects have been queried.
My Question:
Regarding the bool* array as well as atomic_pgs_finished: can there be a scenario in which a deadlock occurs? As far as I can tell, I cannot see a deadlock in this snippet. However, executing this and running it for a while results in a deadlock.
I am seriously considering that a bit (byte?) has randomly flipped (on ECC RAM), causing this deadlock, so that one or more objects were actually not executed. Is this even possible?
Maybe another implementation could help?
Edit: Implementation of the Queue:
template<class T, size_t MaxQueueSize>
class Queue
{
    std::condition_variable consumer_, producer_;
    std::mutex mutex_;
    using unique_lock = std::unique_lock<std::mutex>;
    std::queue<T> queue_;
public:
    template<class U>
    void push_back(U&& item) {
        unique_lock lock(mutex_);
        while(MaxQueueSize == queue_.size())
            producer_.wait(lock);
        queue_.push(std::forward<U>(item));
        consumer_.notify_one();
    }
    T pop_front() {
        unique_lock lock(mutex_);
        while(queue_.empty())
            consumer_.wait(lock);
        auto full = MaxQueueSize == queue_.size();
        auto item = queue_.front();
        queue_.pop();
        if(full)
            producer_.notify_all();
        return item;
    }
};
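For reference, a hypothetical usage sketch of this Queue (the names and the "STOP" sentinel are illustrative, not from the original post, and it assumes the class above plus its <mutex>, <queue> and <condition_variable> includes); push_back blocks while the queue is full and pop_front blocks while it is empty:
#include <iostream>
#include <string>
#include <thread>

Queue<std::string, 4> q;

int main() {
    std::thread consumer([] {
        // pop_front blocks until an item is available
        for (std::string s; (s = q.pop_front()) != "STOP"; )
            std::cout << s << '\n';
    });
    q.push_back(std::string("hello")); // push_back blocks while the queue is full
    q.push_back(std::string("STOP"));  // sentinel, analogous to the uint8_t(-1) stop query
    consumer.join();
}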

Thanks to @Ulrich Eckhardt, @PaulMcKenzie and all the other commenters for the brainstorming! I probably have found the cause of the deadlock. I tried to reduce this example even further and thought about removing atomic_pgs_finished, a variable indicating whether all pgs have been queried. Interestingly, ++atomic_pgs_finished >= pgs.size() returns true not just once but multiple times, so that multiple threads end up inside this specific if-clause.
I simply fixed it by using another mutex around this if-clause. Maybe someone can explain why ++atomic_pgs_finished >= pgs.size() does not act atomically and yields true for multiple threads.
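For what it's worth, the increment itself is atomic; what is not atomic is the combination of crossing the threshold and resetting the counter. Between the increment that first reaches pgs.size() and the later exchange(0), every further increment also satisfies >=, so several threads can take the branch. A mutex-free sketch (an alternative to the extra mutex, not the fix actually used here) that lets exactly one thread win per round:
if (atomic_pgs_finished.fetch_add(1) + 1 == pgs.size()) {
    // fetch_add returns the value *before* the increment, so exactly one
    // thread per round observes the exact final count pgs.size().
    // Subtract instead of zeroing, so increments that already raced in
    // from a following round are not discarded:
    atomic_pgs_finished.fetch_sub(pgs.size());
    output_queue.push_back("LAST_RESULT_IDENTIFIER");
}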
Below I have updated the code (mostly the same as in the question) with comments, so that it might be more understandable.
void thread_lifecycle(
    Queue<std::tuple<int64_t, int64_t, uint8_t>, QUEUE_SIZE>& query, // The input queue containing queries, in my case triples
    Queue<std::string, QUEUE_SIZE>& output_queue, // The output queue of results
    std::vector<Object>& pgs, // Objects which should be queried
    bool* pgs_executed, // Initialized to an array of false-values
    std::mutex& pgs_executed_mutex, // A mutex protecting pgs_executed
    std::atomic<uint32_t>& atomic_pgs_finished // Atomic counter of how many have been executed (to send an end signal)
){
    // Initialize variables
    std::tuple<int64_t, int64_t, uint8_t> next_query;
    std::string output = "";
    int64_t lower, upper;
    // Set the first iteration to false for the very first query.
    // This flips on the second iteration to reuse pgs_executed with true values, and so on...
    bool iter_bool = false;
    // Execute as long as valid queries are received
    while(true) {
        // Get next query
        next_query = query.pop_front();
        // Stop condition reached, terminate thread
        if (std::get<2>(next_query) == uint8_t(-1)) break;
        // "Parse" the query to query the objects in pgs
        lower = std::get<0>(next_query);
        upper = std::get<1>(next_query);
        // Now iterate through pgs and pgs_executed (once)
        for (uint32_t i = 0; i < pgs.size(); i++){
            // Lock to read and write pgs_executed
            pgs_executed_mutex.lock();
            if (pgs_executed[i] == iter_bool) {
                pgs_executed[i] = !pgs_executed[i];
                // Unlock, since we now execute the query on the object (which was not queried before)
                pgs_executed_mutex.unlock();
                // Query execution
                output = pgs.at(i).get_result(lower, upper);
                // If the query yielded a result, add it to the output for the main thread to read
                if (output.length() != 0) {
                    output_queue.push_back(output);
                }
                // HERE THE ROOT CAUSE OF THE DEADLOCK HAPPENS
                // Here I would like to inform the main thread that we executed the query on
                // every object in pgs, so that it should no longer wait for other results.
                if (++atomic_pgs_finished >= pgs.size()) {
                    // Multiple threads are present in this if-clause at once!
                    // This is not intended and causes a deadlock: "LAST_RESULT_IDENTIFIER"
                    // gets push_back-ed multiple times, and the main thread assumes each one
                    // means a query has finished. The main thread then simply added the next
                    // query while the previous one was not finished, causing threads to race
                    // each other on two queries simultaneously, without having the same iter_bool!
                    output_queue.push_back("LAST_RESULT_IDENTIFIER");
                    atomic_pgs_finished.exchange(0);
                }
                // END: HERE THE ROOT CAUSE OF THE DEADLOCK HAPPENS
            } else {
                // This case happens when the next element in the list was already executed (by another thread):
                // simply unlock pgs_executed and continue with the next element in pgs
                pgs_executed_mutex.unlock();
                continue; // This is unnecessary and could be removed
            }
        }
        // Finally flip for the next query in order to reuse bool* (which now holds trues if a second query is incoming)
        iter_bool = !iter_bool;
    }
}

Related

Atomically increment and assign to another atomic

Suppose I have some global:
std::atomic_int next_free_block;
and a number of threads each with access to a
std::atomic_int child_offset;
that may be shared between threads. I would like to allocate free blocks to child offsets in a contiguous manner, that is, I want to perform the following operation atomically:
if (child_offset != 0) child_offset = next_free_block++;
Obviously the above implementation does not work as multiple threads may enter the body of the if statement and then try to assign different blocks to child_offset.
I have also considered the following:
int expected = child_offset;
int updated;
do {
    if (expected == 0) break;
    updated = next_free_block++;
} while (!child_offset.compare_exchange_weak(expected, updated));
But this also doesn't work because if the CAS fails, the side effect of incrementing next_free_block remains even if nothing is assigned to child_offset. This leaves gaps in the allocation of free blocks.
I am aware that I could do this with a mutex (or some kind of spin lock) around each child_offset and potentially DCLP, but I would like to know if this is possible to implement efficiently with atomic operations.
The use case for this is as follows: I have a large tree that I'm building in parallel. The tree is an array of the following:
struct tree_page {
    atomic<uint32_t> allocated;
    uint32_t child_offset[8];
    uint32_t nodes[1015];
};
The tree is built level by level: first the nodes at depth 0 are created, then at depth 1, etc. A separate thread is dispatched for each non-leaf node at the previous step. If no more space is left in a page, a new page is allocated from the global next_free_page which points to the first unused page in the array of struct tree_page and is assigned to an element of child_ptr. A bit field is then set in the node word that indicates which element of the child_ptr array should be used to find the node's children.
The code I am trying to write looks like this:
int expected = allocated.load(std::memory_order_relaxed), updated;
do {
    updated = expected + num_children;
    if (updated > NODES_PER_PAGE) {
        expected = -1; break;
    }
} while (!allocated.compare_exchange_weak(expected, updated));
if (expected != -1) {
    // successfully allocated in the same page
} else {
    for (int i = 0; i < 8; ++i) {
        // this is the operation I would like to be atomic
        if (child_offset[i] == 0)
            child_offset[i] = next_free_block++;
        int offset = try_allocating_at_page(pages[child_offset[i]]);
        if (offset != -1) {
            // successfully allocated at child_offset i
            // ...
            break;
        }
    }
}
As far as I understood from your description, your child_offset array is filled with 0 initially and then filled with concrete values concurrently by different threads.
In this case you can atomically "tag" the value first and, if you succeed, assign the valid value. Something like this:
constexpr int INVALID_VALUE = -1;
for (int i = 0; i < 8; ++i) {
    int expected = 0;
    // this is the operation I would like to be atomic
    if (child_offset[i].compare_exchange_weak(expected, INVALID_VALUE)) {
        child_offset[i] = next_free_block++;
    }
    // Not sure if this is needed in your environment, but just in case
    if (child_offset[i] == INVALID_VALUE) continue;
    ...
}
This doesn't guarantee that all values in the child_offset array will be in ascending order. But if you need that, why not fill it without multithreading involved?
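A self-contained version of that idea might look as follows (a sketch under the stated assumption that child_offset is an array of std::atomic<int> starting at zero; claim_child and the initial value of next_free_block are illustrative, not from the answer):
#include <atomic>

std::atomic<int> next_free_block{1};  // 0 is reserved to mean "unassigned"
std::atomic<int> child_offset[8];     // zero-initialized at namespace scope
constexpr int INVALID_VALUE = -1;

int claim_child(int i) {
    int expected = 0;
    if (child_offset[i].compare_exchange_strong(expected, INVALID_VALUE)) {
        // This thread won the tag: publish the real block index.
        child_offset[i].store(next_free_block.fetch_add(1));
    } else {
        // Another thread is mid-assignment: wait until the real value lands.
        while (child_offset[i].load() == INVALID_VALUE) { /* spin */ }
    }
    return child_offset[i].load();
}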

Keep track of the timer repeatedly in C++

I have a library which accepts a vector of strings as an argument and pushes it to a vector of vectors of strings (a 2D vector).
I have a condition: right from the time the first row is pushed to the 2D vector, a timer should start, and at every "X"th millisecond I want to append a special string "FLAG_BORDER" to the end of the vector of strings that I receive as the library argument.
LIBRARY CODE
using namespace std;
vector<vector<string>> vecOfVecStrs;

void MyLibraryFunction(vector<string> vecstr)
{
    if(TimerLimitNOTHit()) // TimerLimitNOTHit() checks whether the "X"th
                           // millisecond has been reached or not
    {
        vecOfVecStrs.push_back(vecstr); // "X"th ms time period NOT yet reached
    }
    else
    {
        vecstr.push_back("FLAG_BORDER");
        vecOfVecStrs.push_back(vecstr);
    }
}
APPLICATION CODE CALLING LIBRARY FUNCTION:
int main()
{
    do_something();
    vector<string> vecStr;
    while(!(vecStr = Get_VecTor_of_strings()).empty()) // loop while strings arrive
    {
        MyLibraryFunction(vecStr);
    }
}
How do I implement the function TimerLimitNOTHit() here, which should keep track of the timer's "X" milliseconds across calls to MyLibraryFunction(), in C++?
EDITED: Seems like I found an answer. Will it work?
int timeX = 30; // s
bool keepTrack = false;
int ci = 0;
std::vector<int> timeArray;
std::mutex m;

bool keepTracking()
{
    std::unique_lock<std::mutex> lock(m);
    return keepTrack;
}

void GlobalTimer()
{
    while (keepTracking())
    {
        std::this_thread::sleep_for(std::chrono::seconds(timeX));
        timeArray[ci] = 1;
    }
}

std::thread t1(GlobalTimer);
Here is some inspiration, where the function TimerLimitNOTHit returns true if more than X ms has passed since it last returned true.
int X = ... // #0
bool TimerLimitNOTHit()
{
    static auto start = std::chrono::steady_clock::now(); // #1
    auto now = std::chrono::steady_clock::now();
    if (now - start > std::chrono::milliseconds(X))
    {
        start = now; // #2
        return true;
    }
    return false;
}
#0 Defines the distance in time between two calls to TimerLimitNOTHit that return true.
#1 Initialization depends on your application logic. In this example the function does not return true on the first call; this can be changed by initializing start to zero (the epoch).
#2 The start value for the next iteration; again, it depends on your application logic. If you want a steadier true cadence you could do some modulo arithmetic.
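For instance (a sketch of the modulo idea, not from the original answer): advance start by whole intervals instead of snapping it to now, so the cadence does not drift:
auto intervals = (now - start) / std::chrono::milliseconds(X); // whole intervals elapsed
start += intervals * std::chrono::milliseconds(X);             // keeps the original phase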
Let me know if this is not what you were looking for.
Disclaimer
I don't really like the use of static variables, but without a "status" parameter to the function I don't see how it can be avoided.
Furthermore, the use of the global X variable is not to my liking either. X could be changed to a template parameter; this shows intent better and makes it a compile-time constant, as sketched below.
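A sketch of that template-parameter variant (note that each distinct IntervalMs instantiation gets its own static start):
#include <chrono>

template <int IntervalMs>
bool TimerLimitNOTHit()
{
    static auto start = std::chrono::steady_clock::now();
    auto now = std::chrono::steady_clock::now();
    if (now - start > std::chrono::milliseconds(IntervalMs))
    {
        start = now;
        return true;
    }
    return false;
}

// usage: if (TimerLimitNOTHit<30>()) { /* interval has elapsed */ }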

Multithread queue atomic operations

I'm playing with the std::atomic structures and wrote this lock-free multi-producer multi-consumer queue, which I'm attaching here. The idea for the queue is based on two stacks - a producer and a consumer stack, which are essentially linked-list structures. The nodes of the lists hold indexes into an array that holds the actual data, where you would read or write.
The idea is that the nodes for the lists are mutually exclusive, i.e., a pointer to a node can exist only in the producer or the consumer list. A producer would attempt to acquire a node from the producer list, a consumer from the consumer list, and whenever a pointer to a node is acquired by either producer or consumer, it should be out of both lists so that no one else could acquire it. I'm using the std::atomic_compare_exchange functions to spin until a node is popped.
The problem is that there must be something wrong with the logic, or the operations are not as atomic as I assume them to be, because even with 1 producer and 1 consumer, given enough time, the queue will livelock, and what I have noticed is that if you assert that cell != cell->m_next, the assert gets hit! So it's probably something staring me in the face and I just don't see it, so I wonder if someone could pitch in.
Thx
#ifndef MTQueue_h
#define MTQueue_h

#include <atomic>

template<typename Data, uint64_t queueSize>
class MTQueue
{
public:
    MTQueue() : m_produceHead(0), m_consumeHead(0)
    {
        for(int i=0; i<queueSize-1; ++i)
        {
            m_nodes[i].m_idx = i;
            m_nodes[i].m_next = &m_nodes[i+1];
        }
        m_nodes[queueSize-1].m_idx = queueSize - 1;
        m_nodes[queueSize-1].m_next = NULL;
        m_produceHead = m_nodes;
        m_consumeHead = NULL;
    }
    struct CellNode
    {
        uint64_t m_idx;
        CellNode* m_next;
    };
    bool push(const Data& data)
    {
        if(m_produceHead == NULL)
            return false;
        // Pop the producer list.
        CellNode* cell = m_produceHead;
        while(!std::atomic_compare_exchange_strong(&m_produceHead,
                                                   &cell, cell->m_next))
        {
            cell = m_produceHead;
            if(!cell)
                return false;
        }
        // At this point cell should point to a node that is not in any of the lists
        m_data[cell->m_idx] = data;
        // Push that node as the new head of the consumer list
        cell->m_next = m_consumeHead;
        while (!std::atomic_compare_exchange_strong(&m_consumeHead,
                                                    &cell->m_next, cell))
        {
            cell->m_next = m_consumeHead;
        }
        return true;
    }
    bool pop(Data& data)
    {
        if(m_consumeHead == NULL)
            return false;
        // Pop the consumer list
        CellNode* cell = m_consumeHead;
        while(!std::atomic_compare_exchange_strong(&m_consumeHead,
                                                   &cell, cell->m_next))
        {
            cell = m_consumeHead;
            if(!cell)
                return false;
        }
        // At this point cell should point to a node that is not in any of the lists
        data = m_data[cell->m_idx];
        // Push that node as the new head of the producer list
        cell->m_next = m_produceHead;
        while(!std::atomic_compare_exchange_strong(&m_produceHead,
                                                   &cell->m_next, cell))
        {
            cell->m_next = m_produceHead;
        }
        return true;
    }

private:
    Data m_data[queueSize];
    // The nodes for the two lists
    CellNode m_nodes[queueSize];
    volatile std::atomic<CellNode*> m_produceHead;
    volatile std::atomic<CellNode*> m_consumeHead;
};
#endif
I see a few problems with your queue implementation:
It's not a queue, it's a stack: the most recent item pushed is the first item popped. Not that there's anything wrong with stacks, but it's confusing to call it a queue. In fact it is two lock-free stacks: one stack that is initially populated with the array of nodes, and another stack that stores actual data elements using the first stack as a list of free nodes.
There is a data race on CellNode::m_next in both push and pop (unsurprisingly, since they both do the same thing, i.e., pop a node from one stack and push that node onto the other). Say two threads simultaneously enter e.g. pop and both read the same value from m_consumeHead. Thread 1 races ahead successfully popping and sets data. Then Thread 1 writes the value of m_produceHead into cell->m_next while Thread 2 is simultaneously reading cell->m_next to pass to std::atomic_compare_exchange_strong_explicit. The simultaneous non-atomic read and write of cell->m_next by two threads is by definition a data race.
This is what is known as a "benign" race in the concurrency literature: a stale/invalid value is read, but never gets used. If you are confident that your code will never need to run on an architecture where it could cause fiery explosions you may ignore it, but for strict conformance with the Standard memory model you need to make m_next an atomic and use at least memory_order_relaxed reads to eliminate the data race.
ABA. The correctness of your compare-exchange loops is based on the premise that an atomic pointer (e.g., m_produceHead and m_consumeHead) having the same value at both the initial load and the later compare-exchange implies that the pointee object must therefore be unchanged as well. This premise does not hold in any design in which it is possible to recycle an object faster than some thread makes a trip through its compare-exchange loop. Consider this sequence of events:
Thread 1 enters pop and reads the value of m_consumeHead and m_consumeHead->m_next but blocks before calling the compare-exchange.
Thread 2 successfully pops that node from m_consumeHead and blocks as well.
Thread 3 pushes several nodes onto m_consumeHead.
Thread 2 unblocks and pushes the original node onto m_produceHead.
Thread 3 pops that node from m_produceHead, and pushes it back onto m_consumeHead.
Thread 1 finally unblocks and calls the compare-exchange function, which succeeds since the value of m_consumeHead is the same. It pops the node - which is all well and good - but sets m_consumeHead to the stale m_next value it read back in step 1. All the nodes pushed by Thread 3 in the meantime are leaked.
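To make the last two points concrete, here is a sketch (illustrative, not code from the answer) of both remedies: an atomic m_next to remove the data race, and a generation-tagged head to defeat ABA. The tagged head needs a double-width compare-exchange to stay lock-free on most platforms:
#include <atomic>
#include <cstdint>

struct CellNode
{
    uint64_t m_idx;
    std::atomic<CellNode*> m_next; // atomic: the racy read becomes a relaxed load
};

// Every successful pop bumps the tag, so a node that was popped, recycled
// and pushed back no longer compares equal to a stale snapshot.
struct TaggedHead
{
    CellNode* ptr;
    uint64_t  tag;
};

std::atomic<TaggedHead> m_consumeHead;

CellNode* pop_node()
{
    TaggedHead oldHead = m_consumeHead.load();
    while (oldHead.ptr != nullptr)
    {
        TaggedHead newHead{oldHead.ptr->m_next.load(std::memory_order_relaxed),
                           oldHead.tag + 1};
        if (m_consumeHead.compare_exchange_weak(oldHead, newHead))
            return oldHead.ptr; // success: oldHead still holds the popped node
        // on failure oldHead has been reloaded; retry
    }
    return nullptr;
}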
I believe I was able to crack this one. No livelock at 1000000 writes/reads for queues from size 2 to 1024 and from 1 producer and 1 consumer to 100 producers / 100 consumers.
Here's the solution. The trick is not to use cell->m_next directly in the compare and swap (the same applies for the producer code by the way) and to require strict memory order rules:
This seems to confirm my suspicion that it was compiler reordering of the reads and writes.
Here's the code:
bool push(const TData& data)
{
    CellNode* cell = m_produceHead.load(std::memory_order_acquire);
    if(cell == NULL)
        return false;
    // Note: the failure ordering of a compare-exchange may not be
    // memory_order_release, so memory_order_relaxed is used on failure.
    while(!std::atomic_compare_exchange_strong_explicit(&m_produceHead,
                                                        &cell,
                                                        cell->m_next,
                                                        std::memory_order_acquire,
                                                        std::memory_order_relaxed))
    {
        if(!cell)
            return false;
    }
    m_data[cell->m_idx] = data;
    CellNode* curHead = m_consumeHead;
    cell->m_next = curHead;
    while (!std::atomic_compare_exchange_strong_explicit(&m_consumeHead,
                                                         &curHead,
                                                         cell,
                                                         std::memory_order_acquire,
                                                         std::memory_order_relaxed))
    {
        cell->m_next = curHead;
    }
    return true;
}
bool pop(TData& data)
{
    CellNode* cell = m_consumeHead.load(std::memory_order_acquire);
    if(cell == NULL)
        return false;
    while(!std::atomic_compare_exchange_strong_explicit(&m_consumeHead,
                                                        &cell,
                                                        cell->m_next,
                                                        std::memory_order_acquire,
                                                        std::memory_order_relaxed))
    {
        if(!cell)
            return false;
    }
    data = m_data[cell->m_idx];
    CellNode* curHead = m_produceHead;
    cell->m_next = curHead;
    while(!std::atomic_compare_exchange_strong_explicit(&m_produceHead,
                                                        &curHead,
                                                        cell,
                                                        std::memory_order_acquire,
                                                        std::memory_order_relaxed))
    {
        cell->m_next = curHead;
    }
    return true;
}

Debug assertion failed: Subscript out of range with std::vector

I'm trying to fix this problem, which looks like I am accessing an out-of-range index; VS fails to stop where the error occurred, leaving me confused about what's causing it.
The Error:
Debug Assertion Failed! Program: .... File: c:\program files\microsoft visual studio 10.0\vc\include\vector Line: 1440 Expression: String subscript out of range
What the program does:
There are two threads:
Thread 1:
The first thread looks (amongst other things) for changes in the current window using GetForegroundWindow(); the check happens not in a loop but when a WH_MOUSE_LL event is triggered. The data is split into structs of fixed size so that it can be sent to a server over TCP. The first thread records the data (the window title) into an std::list in the current struct.
if(change_in_window)
{
    GetWindowTextW(hActWin,wTitle,256);
    std::wstring title(wTitle);
    current_struct->titles.push_back(title);
}
Thread 2:
The second thread looks for structs not yet sent, and it puts their content into char buffers so that they can be sent over TCP. While I do not know exactly where the error is, judging from the type of error it has to do either with a string or a list, and this is the only code in my whole application using lists/strings (the rest are conventional arrays). Also, commenting out the if block mentioned in the code comments stops the error from happening.
BOOL SendStruct(DATABLOCK data_block, bool sycn)
{
    [..]
    int _size = 0;
    // Important note: when this if block is commented out, the error ceases to exist,
    // so it has something to do with the following block
    if(!data_block.titles.empty()) // check if std::list is empty
    {
        for (std::list<std::wstring>::iterator itr = data_block.titles.begin(); itr != data_block.titles.end(); itr++) {
            _size += (((*itr).size()+1) * 2);
        } // calculate required size. Note the +1 is for an extra character between every title
        wchar_t* wnd_wbuffer = new wchar_t[_size/2](); // allocate space
        int _last = 0;
        // loop through every string and every char of a string and write them down
        for (std::list<std::wstring>::iterator itr = data_block.titles.begin(); itr != data_block.titles.end(); itr++)
        {
            for(unsigned int i = 0; i <= (itr->size()-1); i++)
            {
                wnd_wbuffer[i+_last] = (*itr)[i];
            }
            wnd_wbuffer[_last+itr->size()] = 0x00A6; // separator
            _last += itr->size()+1;
        }
        unsigned char* wnd_buffer = new unsigned char[_size];
        wnd_buffer = (unsigned char*)wnd_wbuffer;
        h_io->header_w_size = _size;
        h_io->header_io_wnd = 1;
        Connect(mode,*header,conn,buffer_in_bytes,wnd_buffer,_size);
        delete[] wnd_wbuffer; // array delete, since it was allocated with new[]
    }
    else
        [..]
    return true;
}
My attempt at thread synchronization:
There is a pointer to the first data_block created (db_main)
pointer to the current data_block (db_cur)
// datablock format
typedef struct _DATABLOCK
{
    [..]
    int logs[512];
    std::list<std::wstring> titles;
    bool bPrsd; // has this datablock been sent, true/false
    bool bFull; // is logs[512] full, true/false
    [..]
    struct _DATABLOCK *next;
} DATABLOCK;

// This is what thread 1 does when it needs to register a mouse press, and it is called like this:
if(change_in_window)
{
    GetWindowTextW(hActWin,wTitle,256);
    std::wstring title(wTitle);
    current_struct->titles.push_back(title);
}
RegisterMousePress(args);
[..]

// pseudo-code to simplify things, although the original function does the exact same thing
RegisterMousePress()
{
    if(it_is_full)
    {
        db_cur->bFull = true;
        if(does db_main exist)
        {
            db_main = new DATABLOCK;
            db_main = db_cur;
            db_main->next = NULL;
        }
        else
        {
            db_cur->next = new DATABLOCK;
            db_cur = db_cur->next;
            db_cur->next = NULL;
        }
        SetEvent(eProcessed); // tell thread 2 there is at least one datablock ready
    }
    else
    {
        write_to_it();
    }
}
// this is actual code and the entry point of thread 2, and my attempt at synchronization
DWORD WINAPI InitQueueThread(void* Param)
{
    DWORD rc;
    DATABLOCK* k;
    SockWClient writer;
    k = db_main;
    while(true)
    {
        rc = WaitForSingleObject(eProcessed, INFINITE);
        if (rc == WAIT_OBJECT_0)
        {
            do
            {
                if(k->bPrsd)
                {
                    continue;
                }
                else
                {
                    if(!k)
                        {break;}
                    k->bPrsd = TRUE;
                    #ifdef DEBUG_NET
                    SendStruct(...);
                    #endif
                }
                if(k->next == NULL || k->next->bPrsd == TRUE || !(k->next->bFull))
                {
                    ResetEvent(eProcessed);
                    break;
                }
            } while (k = k->next); // next element after each loop
        }
    }
    return 1;
}
Details:
Now, something makes me believe that the error is not in there, because the subscript error is very rare. I have only been able to reproduce it reliably when pressing Mouse_Down+Wnd+Tab to scroll through windows and keeping it pressed for some time (while it certainly happened in other cases as well). I avoid posting the whole code because it's a bit large and confusion would be unavoidable. If the error is not here I will edit the post and add more code.
Thanks in advance
There does not appear to be any thread synchronization here. If one thread reads from the structure while the other writes, it might be read during initialization, with a non-empty list containing an empty string (or something invalid, in between).
If there isn't a mutex or semaphore outside the posted function, that is likely the problem.
All the size calculations appear to be valid for Windows, although I didn't attempt to run it… and <= … -1 instead of < in i <= (itr->size()-1) and 2 instead of sizeof (wchar_t) in new wchar_t[_size/2](); are a bit odd.
The problem with your code is that while thread 2 correctly waits for the data and thread 1 correctly notifies about them, thread 2 doesn't prevent thread 1 from touching the data under its hands while it is still processing them. The typical device used to solve such a problem is the monitor pattern.
It consists of one mutex (used to protect the data, held any time you access them) and a condition variable (an Event, in Windows terms), which conveys the information about new data to the consumer.
The producer would normally obtain the mutex, produce the data, release the mutex, then fire the event.
The consumer is more tricky - it has to obtain the mutex, check whether new data has become available, then wait for the Event using the SignalObjectAndWait function that temporarily releases the mutex, then process the newly acquired data, then release the mutex.
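In portable C++ the same monitor could be sketched like this (illustrative names; std::condition_variable::wait atomically releases the mutex while blocking, which is the role SignalObjectAndWait plays above, and DATABLOCK stands in for the question's block type):
#include <condition_variable>
#include <mutex>
#include <queue>

struct DATABLOCK;             // the block type from the question

std::mutex m;
std::condition_variable cv;   // the Event, in Windows terms
std::queue<DATABLOCK*> ready; // data protected by the mutex

// Producer: obtain the mutex, produce the data, release, then signal.
void publish(DATABLOCK* db)
{
    { std::lock_guard<std::mutex> lock(m); ready.push(db); }
    cv.notify_one();
}

// Consumer: wait() releases the mutex while blocking and reacquires it
// before returning, so the data is always accessed under the lock.
DATABLOCK* acquire()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !ready.empty(); });
    DATABLOCK* db = ready.front();
    ready.pop();
    return db;
}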

How to lock Queue variable address instead of using Critical Section?

I have 2 threads and a global Queue; one thread (t1) pushes data and another (t2) pops data. I wanted to synchronize this operation without wrapping every use of the queue in a critical section, using the Windows API.
The Queue is global, and I wanted to know how to synchronize: is it done by locking the address of the Queue?
Is it possible to use the Boost library for the above problem?
One approach is to have two queues instead of one:
The producer thread pushes items to queue A.
When the consumer thread wants to pop items, queue A is swapped with empty queue B.
The producer thread continues pushing items to the fresh queue A.
The consumer, uninterrupted, consumes items off queue B and empties it.
Queue A is swapped with queue B etc.
The only locking/blocking/synchronization happens when the queues are being swapped, which should be a fast operation since it's really a matter of swapping two pointers.
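A minimal sketch of the two-queue idea (names are illustrative; the producer still takes the lock briefly per push, and it assumes a single consumer that only calls refill once its queue is drained):
#include <mutex>
#include <queue>
#include <utility>

template <class T>
class SwapQueue
{
    std::queue<T> a_; // producer side (queue A)
    std::queue<T> b_; // consumer side (queue B)
    std::mutex m_;
public:
    void push(T item)
    {
        std::lock_guard<std::mutex> lock(m_);
        a_.push(std::move(item));
    }

    // Called by the consumer when queue B runs dry: swap A and B under the
    // lock (a cheap container swap), then drain B without further locking.
    std::queue<T>& refill()
    {
        std::lock_guard<std::mutex> lock(m_);
        std::swap(a_, b_);
        return b_;
    }
};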
I thought you could make a queue with those conditions without using any atomics or any thread-safe stuff at all?
Like, if it's just a circular buffer, one thread controls the read pointer and the other controls the write pointer. Both don't update until they are finished reading or writing, and it just works?
The only point of difficulty is determining, when read == write, whether the queue is full or empty, but you can overcome this by just having one dummy item always in the queue.
class Queue
{
    volatile Object* buffer;
    int size;
    volatile int readpoint;
    volatile int writepoint;

    void Init(int s)
    {
        size = s;
        buffer = new Object[s];
        readpoint = 0;
        writepoint = 1;
    }

    // thread A will call this
    bool Push(Object p)
    {
        if(writepoint == readpoint)
            return false;
        int wp = writepoint - 1;
        if(wp < 0)
            wp += size;
        buffer[wp] = p;
        int newWritepoint = writepoint + 1;
        if(newWritepoint == size)
            newWritepoint = 0;
        writepoint = newWritepoint;
        return true;
    }

    // thread B will call this
    bool Pop(Object* p)
    {
        int writepointTest = writepoint;
        if(writepointTest < readpoint)
            writepointTest += size;
        if(readpoint + 1 == writepointTest) // wrap-adjusted emptiness check
            return false;
        *p = buffer[readpoint];
        int newReadpoint = readpoint + 1;
        if(newReadpoint == size)
            newReadpoint = 0;
        readpoint = newReadpoint;
        return true;
    }
};
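As an aside, volatile provides no inter-thread ordering guarantees in standard C++, so the snippet above is formally a data race. A modern single-producer/single-consumer ring would use std::atomic indexes with acquire/release instead; a sketch of the same scheme (illustrative, with one slot sacrificed to distinguish full from empty):
#include <atomic>
#include <cstddef>

template <class Object, size_t N>
class SpscRing
{
    Object buffer_[N];
    std::atomic<size_t> read_{0}, write_{0};
public:
    bool Push(const Object& p)             // only the producer calls this
    {
        size_t w = write_.load(std::memory_order_relaxed);
        size_t next = (w + 1) % N;
        if (next == read_.load(std::memory_order_acquire))
            return false;                  // full
        buffer_[w] = p;
        write_.store(next, std::memory_order_release);
        return true;
    }
    bool Pop(Object* p)                    // only the consumer calls this
    {
        size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                  // empty
        *p = buffer_[r];
        read_.store((r + 1) % N, std::memory_order_release);
        return true;
    }
};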
Another way to handle this issue is to allocate your queue dynamically and assign it to a pointer. The pointer value is passed off between threads when items have to be dequeued, and you protect this operation with a critical section. This means locking for every push into the queue, but much less contention on the removal of items.
This works well when you have many items between enqueueing and dequeueing, and works less well with few items.
Example (I'm using some given RAII locking class to do the locking). Also note: this is really only safe when only one thread is dequeueing.
queue* my_queue = 0;
queue* pDequeue = 0;
critical_section section;

void enqueue(stuff& item)
{
    locker lock(section);
    if (!my_queue)
    {
        my_queue = new queue;
    }
    my_queue->add(item);
}

item* dequeue()
{
    if (!pDequeue)
    { // handoff for dequeue work
        locker lock(section);
        pDequeue = my_queue;
        my_queue = 0;
    }
    if (pDequeue)
    {
        item* pItem = pDequeue->pop(); // remove item and return it
        if (!pItem)
        {
            delete pDequeue;
            pDequeue = 0;
        }
        return pItem;
    }
    return 0;
}