Win32 alternative to pthread - c++

Is it possible to write this using standard Win32 CreateMutex-style code? I am just wondering whether I need to introduce a new library to our application or if I can find a way to write this myself. I just can't figure out how to do the wait inside a CriticalSection. This is my current working code with the pthread library.
T remove() {
pthread_mutex_lock(&m_mutex);
while (m_queue.size() == 0) {
pthread_cond_wait(&m_condv, &m_mutex);
}
T item = m_queue.front();
m_queue.pop_front();
pthread_mutex_unlock(&m_mutex);
return item;
}

For pre-VC-2012 support, the best alternative is Boost.Thread, which supports condition variables.
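For reference, here is a minimal sketch of such a queue with Boost.Thread; the class and member names are illustrative, not the poster's actual code. With VC 2012 or later, std::mutex and std::condition_variable from <mutex> and <condition_variable> are drop-in equivalents.
#include <deque>
#include <boost/thread.hpp>

template <typename T>
class BlockingQueue
{
    std::deque<T> m_queue;
    boost::mutex m_mutex;
    boost::condition_variable m_condv;
public:
    void add(T item)
    {
        {
            boost::lock_guard<boost::mutex> lock(m_mutex);
            m_queue.push_back(item);
        }
        m_condv.notify_one();
    }
    T remove()
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        while (m_queue.empty())
            m_condv.wait(lock);   // releases the lock while blocked
        T item = m_queue.front();
        m_queue.pop_front();
        return item;              // lock released by unique_lock's destructor
    }
};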

Here's my attempt. This is not the best implementation of a condition-style wait in Win32, but I think it works. It could use careful code review scrutiny.
One caveat - it doesn't necessarily guarantee ordered fairness, since all the waiting threads may initially be blocked waiting for the event. The scheduler will resume all of those threads at that point (up to the subsequent blocking EnterCriticalSection call), but not necessarily in the same order in which they arrived at the remove() call. This likely isn't a big deal for most apps with only a handful of threads, but it's something most threading frameworks guarantee.
Other caveat - for brevity, I'm leaving out the important steps of checking the return value from all of these Win32 APIs.
CRITICAL_SECTION m_cs;
HANDLE m_event;
void Init()
{
InitializeCriticalSection(&m_cs);
m_event = CreateEvent(NULL, TRUE, FALSE, NULL); // manual reset event
}
void UnInit()
{
DeleteCriticalSection(&m_cs);
CloseHandle(m_event);
m_event = NULL;
}
T remove()
{
T item;
bool fGotItem = false;
while (fGotItem == false)
{
// wait for event to be signaled
WaitForSingleObject(m_event, INFINITE);
// wait for mutex to become available
EnterCriticalSection(&m_cs);
// inside critical section
{
// try to dequeue something - it's possible that the queue is empty because another
// thread pre-empted us and got the last item in the queue before us
size_t queue_size = m_queue.size();
if (queue_size == 1)
{
// the queue is about to go empty
ResetEvent(m_event);
}
if (queue_size > 0)
{
fGotItem = true;
item = m_queue.front();
m_queue.pop_front();
}
}
LeaveCriticalSection(&m_cs);
}
return item;
}
void Add(T& item)
{
// wait for critical section to become available
EnterCriticalSection(&m_cs);
// inside critical section
{
m_queue.push_back(item);
SetEvent(m_event); // signal other threads that something is available
}
LeaveCriticalSection(&m_cs);
}

Windows Vista introduced native Win32 condition variable (CONDITION_VARIABLE) and Slim Reader/Writer (SRW) lock primitives for exactly this type of scenario. For example:
Using a critical section:
CRITICAL_SECTION m_cs;
CONDITION_VARIABLE m_condv;
InitializeCriticalSection(&m_cs);
InitializeConditionVariable(&m_condv);
...
void add(T item)
{
EnterCriticalSection(&m_cs);
m_queue.push_back(item);
LeaveCriticalSection(&m_cs);
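// waking outside the lock is permitted and avoids waking a thread that then immediately blocks on the critical section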
WakeConditionVariable(&m_condv);
}
T remove()
{
EnterCriticalSection(&m_cs);
while (m_queue.size() == 0)
SleepConditionVariableCS(&m_condv, &m_cs, INFINITE);
T item = m_queue.front();
m_queue.pop_front();
LeaveCriticalSection(&m_cs);
return item;
}
Using a SRW lock:
SRWLOCK m_lock;
CONDITION_VARIABLE m_condv;
InitializeSRWLock(&m_lock);
InitializeConditionVariable(&m_condv);
...
void add(T item)
{
AcquireSRWLockExclusive(&m_lock);
m_queue.push_back(item);
ReleaseSRWLockExclusive(&m_lock);
WakeConditionVariable(&m_condv);
}
T remove()
{
AcquireSRWLockExclusive(&m_lock);
while (m_queue.size() == 0)
SleepConditionVariableSRW(&m_condv, &m_lock, INFINITE, 0);
T item = m_queue.front();
m_queue.pop_front();
ReleaseSRWLockExclusive(&m_lock);
return item;
}

Related

Handle mutex lock in callback c++

I've got a Timer class that can run with both an initial time and an interval. There's an internal function internalQuit that performs thread.join() before a thread is started again in resetCallback. The thing is that each public function has its own std::lock_guard on the mutex to prevent the data from being written concurrently. I'm now running into an issue where, when the callback is used to, for example, stop the timer, the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
void start()
void stop() // for example
{
std::lock_guard lock{mutex};
running = false;
sleepCv.notify_all();
}
void setInitTime()
void setIntervalTime()
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
startTimerThread(std::forward<Function>(timeoutHandler));
}
private:
internalQuit() // performs thread join
{
{
std::lock_guard lock {mutex};
quit = true;
running = false;
sleepCv.notify_all();
}
thread.join();
}
mainLoop(Function &&timeoutHandler)
{
while(!quit)
{
std::unique_lock lock{mutex};
// wait for running with sleepCv.wait()
// handle initTimer with sleepCv.wait_until()
timeoutHandler(); // callback
// handle intervalTimer with sleepCv.wait_until()
timeoutHandler(); // callback
}
}
startTimerThread(Function &&timeoutHandler)
{
thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
mainLoop(timeoutHandler);
});
}
std::thread thread{};
std::mutex mutex{};
std::condition_variable sleepCv{};
// initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following test case in GTest. I'm expecting the timer to stop in the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;
void timerCallback()
{
callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}
TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
std::atomic<int> testCounter{0};
Timer<std::chrono::steady_clock > t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
t.resetCallback([&]{
testCounter += 1;
t.stop();
});
t.start();
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes in each of the public functions fixes the issue, but that could lead to possible race conditions on the data being written. Hence each function takes a lock before writing to, e.g., the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then call join on that one. I'm also investigating recursive_mutex, but have no experience with using it.
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
auto prevThread = std::thread(std::move(this->thread));
// didn't know how to continue from here, requiring more selfstudy.
startTimerThread(std::forward<Function>(timeoutHandler));
}
It's a new subject for me, have worked with mutexes and timers before but with relatively simple stuff.
Thank you in advance.

What C++'s equivalent to winapi's MsgWaitForMultipleObjectsEx

I'm making the transition from using native Win32 API calls to manage my thread's message queue to using my own C++ code. I have encountered a question which I can't fully answer.
Given the following code snippet
LRESULT QueueConsumeThread()
{
MSG msg = { 0 };
HANDLE hHandles[] = { hHandle1, hHandle2 };
while (true)
{
DWORD dwRes;
switch (dwRes = ::MsgWaitForMultipleObjects(_countof(hHandles), hHandles, FALSE, INFINITE, QS_ALLEVENTS))
{
case WAIT_OBJECT_0 :
DoSomething();
break;
case WAIT_OBJECT_0 + 1:
DoSomething2();
break;
case WAIT_OBJECT_0 + _countof(hHandles):
ATLASSERT(msg.message == WM_QUIT);
return 1;
}
}
return 1;
}
I have read in many sources that a particular thread should be associated with a single condition_variable, and also that using multiple condition_variables or invoking wait_for() or wait_until() doesn't sound very efficient.
The following source suggested implementing a safe_queue using condition_variables. I guess that PeekMessage/GetMessage/MsgWaitForMultipleObjects work similarly, but what kind of data should each cell of the queue hold, and how would it receive event signals?
Edit: I'm asking this as I have to write a cross-platform application.
Contrary to Windows synchronization events (which can remain in a signaled state), std::condition_variable is decoupled from any state. So the most natural approach is to define several condition flags and wait on / report them with a single condition_variable:
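// waiting (consumer) side: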
std::unique_lock<std::mutex> lock(m);
cv.wait(lock, []{ return ready1 || ready2 || ready3; });
if (ready1) { ... }
if (ready2) { ... }
if (ready3) { ... }
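// signaling (producer) side: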
std::unique_lock<std::mutex> lock(m);
ready1 = true;
cv.notify_one();

C++ Lock a mutex as if from another thread?

I'm writing an Audio class that holds a std::thread for refilling some buffers asynchronously. Say we call the main thread A and the background (class member) thread B. I'm using a std::mutex to block thread B whenever the sound is not playing, so that it doesn't run in the background when unnecessary and doesn't use excess CPU. The mutex is locked by thread A by default, so thread B is blocked; when it's time to play the sound, thread A unlocks the mutex and thread B runs (by locking then immediately unlocking it) in a loop.
The issue comes up when thread B sees that it's reached the end of the file. It can stop playback and clean up buffers and such, but it can't stop its own loop, because the mutex would have to be locked by thread A, not thread B.
Here's the relevant code outline:
class Audio {
private:
// ...
std::thread Thread;
std::mutex PauseMutex; // mutex that blocks Thread, locked in constructor
void ThreadFunc(); // assigned to Thread in constructor
public:
// ...
void Play();
void Stop();
};
void Audio::ThreadFunc() {
// ... (include initial check of mutex here)
while (!this->EndThread) { // Thread-safe flag, only set when Audio is destructed
// ... Check and refill buffers as necessary, etc ...
if (EOF)
Stop();
// Attempt a lock, blocks thread if sound/music is not playing
this->PauseMutex.lock();
this->PauseMutex.unlock();
}
}
void Audio::Play() {
// ...
PauseMutex.unlock(); // unlock mutex so loop in ThreadFunc can start
}
void Audio::Stop() {
// ...
PauseMutex.lock(); // locks mutex to stop loop in ThreadFunc
// ^^ This is the issue here
}
In the above setup, when the background thread sees that it's reached EOF, it would call the class's Stop() function, which supposedly locks the mutex to stop the background thread. This doesn't work because the mutex would have to be locked by the main thread, not the background thread (in this example, it crashes in ThreadFunc because the background thread attempts a lock in its main loop after already locking in Stop()).
At this point the only thing I could think of would be to somehow have the background thread lock the mutex as if it was the main thread, giving the main thread ownership of the mutex... if that's even possible? Is there a way for a thread to transfer ownership of a mutex to another thread? Or is this a design flaw in the setup I've created? (If the latter, are there any rational workarounds?) Everything else in the class so far works just as designed.
I'm not going to even pretend to understand how your code is trying to do what it is doing. There is one thing, however, that is evident. You're trying to use a mutex for conveying some predicate state change, which is the wrong vehicle to drive on that freeway.
Predicate state change is handled by coupling three things:
Some predicate datum
A mutex to protect the predicate
A condition variable to convey possible change in predicate state.
The Goal
The goal in the below example is to demonstrate how a mutex, a condition variable, and predicate data are used in concert when controlling program flow across multiple threads. It shows examples of using both wait and wait_for condition variable functionality, as well as one way to run a member function as a thread proc.
Following is a simple Player class that toggles between four possible states:
Stopped : The player is not playing, nor paused, nor quitting.
Playing : The player is playing
Paused : The player is paused, and will continue from whence it left off once it resumes Playing.
Quit : The player should stop what it is doing and terminate.
The predicate data is fairly obvious: the state member. It must be protected, which means it cannot be changed or checked except under the protection of the mutex. I've added to this a counter that simply increments while the Playing state is maintained over some period of time. More specifically:
While Playing, each 200ms the counter increments, then dumps some data to the console.
While Paused, counter is not changed, but retains its last value while Playing. This means when resumed it will continue from where it left off.
When Stopped, the counter is reset to zero and a newline is injected into the console output. This means switching back to Playing will start the counter sequence all over again.
Setting the Quit state has no effect on counter, it will be going away along with everything else.
The Code
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <unistd.h>
using namespace std::chrono_literals;
struct Player
{
private:
std::mutex mtx;
std::condition_variable cv;
std::thread thr;
enum State
{
Stopped,
Paused,
Playing,
Quit
};
State state;
int counter;
void signal_state(State st)
{
std::unique_lock<std::mutex> lock(mtx);
if (st != state)
{
state = st;
cv.notify_one();
}
}
// main player monitor
void monitor()
{
std::unique_lock<std::mutex> lock(mtx);
bool bQuit = false;
while (!bQuit)
{
switch (state)
{
case Playing:
std::cout << ++counter << '.';
cv.wait_for(lock, 200ms, [this](){ return state != Playing; });
break;
case Stopped:
cv.wait(lock, [this]() { return state != Stopped; });
std::cout << '\n';
counter = 0;
break;
case Paused:
cv.wait(lock, [this]() { return state != Paused; });
break;
case Quit:
bQuit = true;
break;
}
}
}
public:
Player()
: state(Stopped)
, counter(0)
{
thr = std::thread(std::bind(&Player::monitor, this));
}
~Player()
{
quit();
thr.join();
}
void stop() { signal_state(Stopped); }
void play() { signal_state(Playing); }
void pause() { signal_state(Paused); }
void quit() { signal_state(Quit); }
};
int main()
{
Player player;
player.play();
sleep(3);
player.pause();
sleep(3);
player.play();
sleep(3);
player.stop();
sleep(3);
player.play();
sleep(3);
}
Output
I can't really demonstrate this. You'll have to run it and see how it works, and I invite you to toy with the states in main() as I have above. Do note, however, that once quit is invoked, none of the other states will be monitored. Setting the Quit state will shut down the monitor thread. For what it's worth, a run of the above should look something like this:
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25.26.27.28.29.30.
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.
with the first set of numbers dumped in two groups (1..15, then 16..30), as a result of playing, then pausing, then playing again. Then a stop is issued, followed by another play for a period of ~3 seconds. After that, the object self-destructs, and in doing so, sets the Quit state, and waits for the monitor to terminate.
Summary
Hopefully you get something out of this. If you find yourself trying to manage predicate state by manually latching and releasing mutexes, chances are you need a condition-variable design pattern to facilitate detecting those changes.
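///////////////////////////////////////////////////////////////////////////////
// class CtLockCS - using critical section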
class CtLockCS
{
public:
//--------------------------------------------------------------------------
CtLockCS() { ::InitializeCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
~CtLockCS() { ::DeleteCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
bool TryLock() { return ::TryEnterCriticalSection(&m_cs) == TRUE; }
//--------------------------------------------------------------------------
void Lock() { ::EnterCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
void Unlock() { ::LeaveCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
protected:
CRITICAL_SECTION m_cs;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockMX - using mutex
class CtLockMX
{
public:
//--------------------------------------------------------------------------
CtLockMX(const TCHAR* nameMutex = 0)
{ m_mx = ::CreateMutex(0, FALSE, nameMutex); }
//--------------------------------------------------------------------------
~CtLockMX()
{ if (m_mx) { ::CloseHandle(m_mx); m_mx = NULL; } }
//--------------------------------------------------------------------------
bool TryLock()
{ return m_mx ? (::WaitForSingleObject(m_mx, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock()
{ if (m_mx) { ::WaitForSingleObject(m_mx, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{ if (m_mx) { ::ReleaseMutex(m_mx); } }
//--------------------------------------------------------------------------
protected:
HANDLE m_mx;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockSM - using semaphore
class CtLockSM
{
public:
//--------------------------------------------------------------------------
CtLockSM(int maxcnt) { m_sm = ::CreateSemaphore(0, maxcnt, maxcnt, 0); }
//--------------------------------------------------------------------------
~CtLockSM() { ::CloseHandle(m_sm); }
//--------------------------------------------------------------------------
bool TryLock() { return m_sm ? (::WaitForSingleObject(m_sm, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock() { if (m_sm) { ::WaitForSingleObject(m_sm, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{
if (m_sm){
LONG prevcnt = 0;
::ReleaseSemaphore(m_sm, 1, &prevcnt);
}
}
//--------------------------------------------------------------------------
protected:
HANDLE m_sm;
};

Waiting until another process locks and then unlocks a Win32 mutex

I am trying to tell when a producer process accesses a shared Windows mutex. After this happens, I need to lock that same mutex and process the associated data. Is there a built-in way in Windows to do this, short of a ridiculous loop?
I know this is doable by creating a custom Windows event in the producer process, but I want to avoid changing that program's code as much as possible.
What I believe will work (in a ridiculously inefficient way) would be this (NOTE: this is not my real code, I know there are like 10 different things very wrong with this; I want to avoid doing anything like this):
#include <Windows.h>
int main() {
HANDLE h = CreateMutex(NULL, 0, "name");
if(!h) return -1;
int locked = 0;
while(true) {
if(locked) {
//can assume it wont be locked longer than a second, but even if it does should work fine
if(WaitForSingleObject(h, 1000) == WAIT_OBJECT_0) {
// do processing...
locked = 0;
ReleaseMutex(h);
}
// oh god this is ugly, and wastes so much CPU...
} else if(!(locked = WaitForSingleObject(h, 0) == WAIT_TIMEOUT)) {
ReleaseMutex(h);
}
}
return 0;
}
If there is an easier way in C++ for whatever reason, that's fine - my actual code is C++. This example was just easier to construct in C.
You will not be able to avoid changing the producer if efficient sharing is needed. Your design is fundamentally flawed for that.
A producer needs to be able to signal a consumer when data is ready to be consumed, and to make sure it does not alter the data while it is busy being consumed. You cannot do that with a single mutex alone.
The best way is to have the producer set an event when data is ready, and have the consumer reset the event when the data has been consumed. Use the mutex only to sync access to the data, not to signal the data's readiness.
#include <Windows.h>
int main()
{
HANDLE readyEvent = CreateEvent(NULL, TRUE, FALSE, "ready");
if (!readyEvent) return -1;
HANDLE mutex = CreateMutex(NULL, FALSE, "name");
if (!mutex) return -1;
while(true)
{
if (WaitForSingleObject(readyEvent, 1000) == WAIT_OBJECT_0)
{
if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
{
// process as needed...
ResetEvent(readyEvent);
ReleaseMutex(mutex);
}
}
}
return 0;
}
If you can't change the producer to use an event, then at least add a flag to the data itself. The producer can lock the mutex, update the data and the flag, and unlock the mutex. Consumers will then have to periodically lock the mutex, check the flag, read the new data if the flag is set, reset the flag, and unlock the mutex.
#include <Windows.h>
int main()
{
HANDLE mutex = CreateMutex(NULL, FALSE, "name");
if (!mutex) return -1;
while(true)
{
if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
{
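// 'ready' stands for the flag stored alongside the shared data (not shown here)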
if (ready)
{
// process as needed...
ready = false;
}
ReleaseMutex(mutex);
}
}
return 0;
}
So either way, your logic will have to be tweaked in both the producer and consumer.
Otherwise, if you can't change the producer at all, then you have no choice but to change the consumer alone to simply check the data for changes periodically:
#include <Windows.h>
int main()
{
HANDLE mutex = CreateMutex(NULL, 0, "name");
if (!mutex) return -1;
while(true)
{
if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
{
// check data for changes
// process new data as needed
// cache results for next time...
ReleaseMutex(mutex);
}
}
return 0;
}
Tricky. I'm going to answer the underlying question: when is the memory written?
This can be observed with a four-step solution (a rough sketch follows the list):
Inject a DLL in the watched process
Add a vectored exception handler for STATUS_GUARD_PAGE_VIOLATION
Set the guard page bit on the 2 MB memory range (finding it could be a challenge)
From the vectored exception handler, inform your process and re-establish the guard bit (it's one-shot)
You may need only a single guard page if the image is always fully rewritten.
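Here is a rough, untested sketch of steps 2-4, assuming the watched region's base and size are already known. The notification back to the watching process (e.g. setting a named event) is left as a placeholder, and re-arming is done via the single-step trick because the guard bit is consumed before the faulting write retries.
#include <windows.h>

static void*  g_watchBase;   // base of the watched region (assumed known)
static SIZE_T g_watchSize;   // size of the watched region (assumed known)

static LONG CALLBACK GuardHandler(EXCEPTION_POINTERS* info)
{
    DWORD code = info->ExceptionRecord->ExceptionCode;
    if (code == EXCEPTION_GUARD_PAGE)
    {
        // The guard bit has just been consumed; the write will proceed on retry.
        // Notify the watching process here (e.g. SetEvent on a named event).
        // Re-arm after the faulting instruction completes by single-stepping.
        info->ContextRecord->EFlags |= 0x100;   // set the trap flag (x86/x64)
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    if (code == EXCEPTION_SINGLE_STEP)
    {
        // The faulting write has completed; re-establish the guard bit.
        DWORD oldProtect;
        ::VirtualProtect(g_watchBase, g_watchSize,
                         PAGE_READWRITE | PAGE_GUARD, &oldProtect);
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Called from the injected DLL once the region has been located.
void ArmWatch(void* base, SIZE_T size)
{
    g_watchBase = base;
    g_watchSize = size;
    ::AddVectoredExceptionHandler(1 /* call first */, GuardHandler);
    DWORD oldProtect;
    ::VirtualProtect(base, size, PAGE_READWRITE | PAGE_GUARD, &oldProtect);
}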

win32 thread-safe queue implementation using native windows API

Because of the lack of condition variables in Windows (they were introduced in Vista, but are not supported on Windows XP and 2003), it is not very easy to implement a thread-safe queue in C++. See Strategies for Implementing POSIX Condition Variables on Win32. What I require is to use just a CriticalSection or Mutex plus an Event, without semaphores or condition variables.
I also tried to find an existing implementation that uses only the Win32 native API, but had no luck, so I wrote one myself. The problem is that I am not 100% sure the code is thread-safe. Can anyone tell me whether it is OK?
class CEventSyncQueue
{
public:
CEventSyncQueue(int nCapacity = -1);
virtual ~CEventSyncQueue();
virtual void Put(void* ptr);
virtual void* Get();
protected:
int m_nCapacity;
CPtrList m_list;
CRITICAL_SECTION m_lock;
HANDLE m_hGetEvent;
HANDLE m_hPutEvent;
};
CEventSyncQueue::CEventSyncQueue(int nCapacity)
{
m_nCapacity = nCapacity;
::InitializeCriticalSection(&m_lock);
m_hPutEvent = ::CreateEvent(NULL, FALSE, FALSE, NULL);
m_hGetEvent = ::CreateEvent(NULL, FALSE, FALSE, NULL);
}
CEventSyncQueue::~CEventSyncQueue()
{
m_list.RemoveAll();
::CloseHandle(m_hGetEvent);
::CloseHandle(m_hPutEvent);
::DeleteCriticalSection(&m_lock);
}
void CEventSyncQueue::Put(void* ptr)
{
::EnterCriticalSection(&m_lock);
while(m_nCapacity > 0 && m_list.GetCount() >= m_nCapacity)
{
::LeaveCriticalSection(&m_lock);
//wait
if(::WaitForSingleObject(m_hPutEvent, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
::EnterCriticalSection(&m_lock);
}
if(m_nCapacity > 0)
{
ASSERT(m_list.GetCount() < m_nCapacity);
}
m_list.AddTail(ptr);
::SetEvent(m_hGetEvent); //notifyAll
::LeaveCriticalSection(&m_lock);
}
void* CEventSyncQueue::Get()
{
::EnterCriticalSection(&m_lock);
while(m_list.IsEmpty())
{
::LeaveCriticalSection(&m_lock);
//wait
if(::WaitForSingleObject(m_hGetEvent, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
::EnterCriticalSection(&m_lock);
}
ASSERT(!m_list.IsEmpty());
void* ptr = m_list.RemoveHead();
::SetEvent(m_hPutEvent); //notifyAll
::LeaveCriticalSection(&m_lock);
return ptr;
}
It's trivial to implement a thread-safe queue in Windows. I've done it in Delphi, C++, BCB etc.
Why do you think that a condition variable is required? How do you think that Windows Message Queues work?
Events are the wrong primitive to use for P-C queues. Easiest/clearest way is to use a semaphore.
Simple unbounded producer-consumer queue.
template <typename T> class PCSqueue{
CRITICAL_SECTION access;
deque<T> *objectQueue;
HANDLE queueSema;
public:
PCSqueue(){
objectQueue=new deque<T>;
InitializeCriticalSection(&access);
queueSema=CreateSemaphore(NULL,0,MAXINT,NULL);
};
void push(T ref){
EnterCriticalSection(&access);
objectQueue->push_front(ref);
LeaveCriticalSection(&access);
ReleaseSemaphore(queueSema,1,NULL);
};
bool pop(T *ref,DWORD timeout){
if (WAIT_OBJECT_0==WaitForSingleObject(queueSema,timeout)) {
EnterCriticalSection(&access);
*ref=objectQueue->back();
objectQueue->pop_back();
LeaveCriticalSection(&access);
return(true);
}
else
return(false);
};
};
Edit - a bounded queue would not be much more difficult - you need another semaphore to count the empty spaces. I don't use bounded queues, but I'm sure it would be OK - a bounded queue with two semaphores and a mutex/CS is a standard pattern (see the sketch below).
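For illustration, a rough sketch of that bounded variant, assuming std::deque and the same push-front/pop-back convention as above (class and member names are made up here):
#include <deque>
#include <windows.h>

template <typename T> class BoundedPCSqueue {
    CRITICAL_SECTION access;
    std::deque<T> objectQueue;
    HANDLE itemsSema;    // counts items available to pop
    HANDLE spacesSema;   // counts free slots available to push
public:
    explicit BoundedPCSqueue(LONG capacity) {
        InitializeCriticalSection(&access);
        itemsSema  = CreateSemaphore(NULL, 0, capacity, NULL);
        spacesSema = CreateSemaphore(NULL, capacity, capacity, NULL);
    }
    ~BoundedPCSqueue() {
        CloseHandle(spacesSema);
        CloseHandle(itemsSema);
        DeleteCriticalSection(&access);
    }
    bool push(T ref, DWORD timeout) {
        if (WaitForSingleObject(spacesSema, timeout) != WAIT_OBJECT_0)
            return false;                        // queue stayed full
        EnterCriticalSection(&access);
        objectQueue.push_front(ref);
        LeaveCriticalSection(&access);
        ReleaseSemaphore(itemsSema, 1, NULL);    // one more item available
        return true;
    }
    bool pop(T* ref, DWORD timeout) {
        if (WaitForSingleObject(itemsSema, timeout) != WAIT_OBJECT_0)
            return false;                        // queue stayed empty
        EnterCriticalSection(&access);
        *ref = objectQueue.back();
        objectQueue.pop_back();
        LeaveCriticalSection(&access);
        ReleaseSemaphore(spacesSema, 1, NULL);   // one more free slot
        return true;
    }
};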
Edit: Use PostMessage() or PostThreadMessage() API calls - they are explicitly declared to be safe from the 'waveOutProc' callback. MSDN says that calling 'other wave functions' will cause deadlock - semaphore calls are not in that set and I would be very surprised indeed if SetEvent() was allowed but ReleaseSemaphore() was not. In fact, I would be surprised if SetEvent() was allowed while ReleaseSemaphore() was not ANYWHERE in Windows.
On second thoughts, it's hardly necessary to explicitly implement a semaphore. Instead, just think about how you would implement a semaphore using events, and approach the problem that way. My first attempt used manual-reset events, which was inefficient but manifestly correct, and then I optimized.
Please note that I haven't debugged (or even compiled!) either of these code fragments, but they should give you the right idea. Here's the manual-reset version:
class CEventSyncQueue
{
public:
CEventSyncQueue(int nCapacity = -1);
virtual ~CEventSyncQueue();
virtual void Put(void* ptr);
virtual void* Get();
protected:
int m_nCapacity;
CPtrList m_list;
CRITICAL_SECTION m_lock;
HANDLE m_queue_not_empty;
HANDLE m_queue_not_full;
};
CEventSyncQueue::CEventSyncQueue(int nCapacity)
{
m_nCapacity = nCapacity;
::InitializeCriticalSection(&m_lock);
m_queue_not_empty = ::CreateEvent(NULL, TRUE, FALSE, NULL);
m_queue_not_full = ::CreateEvent(NULL, TRUE, TRUE, NULL);
}
CEventSyncQueue::~CEventSyncQueue()
{
m_list.RemoveAll();
::CloseHandle(m_queue_not_empty);
::CloseHandle(m_queue_not_full);
::DeleteCriticalSection(&m_lock);
}
void CEventSyncQueue::Put(void* ptr)
{
bool done = false;
while (!done)
{
// If the queue is full, we must wait until it isn't.
if(::WaitForSingleObject(m_queue_not_full, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
// However, we might not be the first to respond to the event,
// so we still need to check whether the queue is full and loop
// if it is.
::EnterCriticalSection(&m_lock);
if (m_nCapacity <= 0 || m_list.GetCount() < m_nCapacity)
{
m_list.AddTail(ptr);
done = true;
// The queue is definitely not empty.
SetEvent(m_queue_not_empty);
// Check whether the queue is now full.
if (m_nCapacity > 0 && m_list.GetCount() >= m_nCapacity)
{
ResetEvent(m_queue_not_full);
}
}
::LeaveCriticalSection(&m_lock);
}
}
void* CEventSyncQueue::Get()
{
void *result = nullptr;
while (result == nullptr)
{
// If the queue is empty, we must wait until it isn't.
if(::WaitForSingleObject(m_queue_not_empty, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
// However, we might not be the first to respond to the event,
// so we still need to check whether the queue is empty and loop
// if it is.
::EnterCriticalSection(&m_lock);
if (!m_list.IsEmpty())
{
result = m_list.RemoveHead();
ASSERT(result != nullptr);
// The queue shouldn't be full at this point!
ASSERT(m_nCapacity <= 0 || m_list.GetCount() < m_nCapacity);
SetEvent(m_queue_not_full);
// Check whether the queue is now empty.
if (m_list.IsEmpty())
{
ResetEvent(m_queue_not_empty);
}
}
::LeaveCriticalSection(&m_lock);
}
return result;
}
And here's the more efficient, auto-reset events version:
class CEventSyncQueue
{
public:
CEventSyncQueue(int nCapacity = -1);
virtual ~CEventSyncQueue();
virtual void Put(void* ptr);
virtual void* Get();
protected:
int m_nCapacity;
CPtrList m_list;
CRITICAL_SECTION m_lock;
HANDLE m_queue_not_empty;
HANDLE m_queue_not_full;
};
CEventSyncQueue::CEventSyncQueue(int nCapacity)
{
m_nCapacity = nCapacity;
::InitializeCriticalSection(&m_lock);
m_queue_not_empty = ::CreateEvent(NULL, FALSE, FALSE, NULL);
m_queue_not_full = ::CreateEvent(NULL, FALSE, TRUE, NULL);
}
CEventSyncQueue::~CEventSyncQueue()
{
m_list.RemoveAll();
::CloseHandle(m_queue_not_empty);
::CloseHandle(m_queue_not_full);
::DeleteCriticalSection(&m_lock);
}
void CEventSyncQueue::Put(void* ptr)
{
if (m_nCapacity <= 0)
{
::EnterCriticalSection(&m_lock);
m_list.AddTail(ptr);
SetEvent(m_queue_not_empty);
::LeaveCriticalSection(&m_lock);
return;
}
bool done = false;
while (!done)
{
// If the queue is full, we must wait until it isn't.
if(::WaitForSingleObject(m_queue_not_full, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
// However, under some (rare) conditions we'll get here and find
// the queue is already full again, so be prepared to loop.
::EnterCriticalSection(&m_lock);
if (m_list.GetCount() < m_nCapacity)
{
m_list.AddTail(ptr);
done = true;
SetEvent(m_queue_not_empty);
if (m_list.GetCount() < m_nCapacity)
{
SetEvent(m_queue_not_full);
}
}
::LeaveCriticalSection(&m_lock);
}
}
void* CEventSyncQueue::Get()
{
void *result = nullptr;
while (result == nullptr)
{
// If the queue is empty, we must wait until it isn't.
if(::WaitForSingleObject(m_queue_not_empty, INFINITE) != WAIT_OBJECT_0)
{
ASSERT(FALSE);
}
// However, under some (rare) conditions we'll get here and find
// the queue is already empty again, so be prepared to loop.
::EnterCriticalSection(&m_lock);
if (!m_list.IsEmpty())
{
result = m_list.RemoveHead();
ASSERT(result != nullptr);
// The queue shouldn't be full at this point!
if (m_nCapacity > 0) ASSERT(m_list.GetCount() < m_nCapacity);
SetEvent(m_queue_not_full);
if (!m_list.IsEmpty())
{
SetEvent(m_queue_not_empty);
}
}
::LeaveCriticalSection(&m_lock);
}
return result;
}
Condition variable? Do you mean the Interlocked* functions? These have been around for a long time - I used them in Windows 2000. You can use them to build a concurrency system, but you'll still have to do a bit of work yourself.
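As a purely illustrative sketch (not a recommendation over proper blocking primitives), here is a minimal spinlock built on the Interlocked* family, just to show that they are building blocks rather than a ready-made solution:
#include <windows.h>

struct SpinLock {
    volatile LONG state = 0;                       // 0 = free, 1 = held
    void lock() {
        // atomically set state to 1 if it is currently 0; otherwise spin
        while (::InterlockedCompareExchange(&state, 1, 0) != 0)
            YieldProcessor();                      // busy-wait hint; no blocking wait here
    }
    void unlock() {
        ::InterlockedExchange(&state, 0);          // release the lock
    }
};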
Alternatively, try OpenMP. To use this you'll need Visual Studio 2008 or greater.