QNX pthread_mutex_lock causing deadlock error ( 45 = EDEADLK ) - c++

I am implementing an asynchronous log writing mechanism for my project's multithreaded application. Below is the partial code of the part where the error occurs.
void CTraceFileWriterThread::run()
{
    bool fShoudIRun = shouldThreadsRun(); // Some global function which decides if operations need to stop. Not really relevant here. Assume "true" value.
    while( fShoudIRun )
    {
        std::string nextMessage = fetchNext();
        if( !nextMessage.empty() )
        {
            process(nextMessage);
        }
        else
        {
            fShoudIRun = shouldThreadsRun();
            condVarTraceWriter.wait();
        }
    }
}
//This is the consumer. This is in my thread with lower priority
std::string CTraceFileWriterThread::fetchNext()
{
    // When there are a lot of logs, I mean A LOT, I believe the
    // control stays in this function for a long time and another
    // thread calling the "add" function is not able to acquire the lock
    // since it's held here.
    std::string message;
    if( !writeQueue.empty() )
    {
        writeQueueMutex.lock(); // Obj of our wrapper around pthread_mutex_lock
        message = writeQueue.front();
        writeQueue.pop(); // std::queue
        writeQueueMutex.unLock();
    }
    return message;
}
// This is the producer and is called from multiple threads.
void CTraceFileWriterThread::add( std::string outputString ) {
    if ( !outputString.empty() )
    {
        // crashes here while trying to acquire the lock when there are lots of
        // logs in prod systems.
        writeQueueMutex.lock();
        const size_t writeQueueSize = writeQueue.size();
        if ( writeQueueSize == maximumWriteQueueCapacity )
        {
            outputString.append( "\n queue full, discarding traces, traces are incomplete" );
        }
        if ( writeQueueSize <= maximumWriteQueueCapacity )
        {
            bool wasEmpty = writeQueue.empty();
            writeQueue.push(outputString);
            condVarTraceWriter.post(); // will be waiting in a function which calls "fetchNext"
        }
        writeQueueMutex.unLock();
    }
}
int wrapperMutex::lock() {
    //#[ operation lock()
    int iRetval;
    int iRetry = 10;
    do
    {
        //
        iRetry--;
        tRfcErrno = pthread_mutex_lock (&tMutex);
        if ( (tRfcErrno == EINTR) || (tRfcErrno == EAGAIN) )
        {
            iRetval = RFC_ERROR;
            (void)sched_yield();
        }
        else if (tRfcErrno != EOK)
        {
            iRetval = RFC_ERROR;
            iRetry = 0;
        }
        else
        {
            iRetval = RFC_OK;
            iRetry = 0;
        }
    } while (iRetry > 0);
    return iRetval;
    //#]
}
I generated the core dump and analysed it with GDB; here are some findings:
Program terminated with signal 11, Segmentation fault.
"Errno=45" at the add function where I am trying to acquire the lock. The wrapper we have around pthread_mutex_lock tries to acquire the lock around 10 times before it gives up.
The code works fine when there are fewer logs. Also, we do not have C++11 or later and are hence restricted to QNX's mutexes. Any help is appreciated, as I have been looking at this issue for over a month with little progress. Please ask if any more info is required.
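For reference, a minimal sketch (reusing the wrapper names from the code above, and assuming lock() returns RFC_OK on success, as in the wrapper shown) of a fetchNext where the emptiness check and the pop happen under the same lock; it only illustrates the contention/race hinted at in the comments above, not a confirmed fix for the EDEADLK:
std::string CTraceFileWriterThread::fetchNext()
{
    std::string message;
    if( writeQueueMutex.lock() == RFC_OK ) // take the lock first, then inspect the queue
    {
        if( !writeQueue.empty() )
        {
            message = writeQueue.front();
            writeQueue.pop();
        }
        writeQueueMutex.unLock(); // released on both the empty and non-empty paths
    }
    return message;
}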

Related

Wait until a variable becomes zero

I'm writing a multithreaded program that can execute some tasks in separate threads.
Some operations require waiting for them at the end of my program's execution. I've written a simple guard for such "important" operations:
class CPendingOperationGuard final
{
public:
    CPendingOperationGuard()
    {
        InterlockedIncrementAcquire( &m_ullCounter );
    }
    ~CPendingOperationGuard()
    {
        InterlockedDecrementAcquire( &m_ullCounter );
    }
    static bool WaitForAll( DWORD dwTimeOut )
    {
        // Here is a topic of my question
        // Return false on timeout
        // Return true if wait was successful
    }
private:
    static volatile ULONGLONG m_ullCounter;
};
Usage is simple:
void ImportantTask()
{
    CPendingOperationGuard guard;
    // Do work
}
// ...
void StopExecution()
{
    if(!CPendingOperationGuard::WaitForAll( 30000 )) {
        // Handle error
    }
}
The question is: how to efficiently wait until m_ullCounter becomes zero, or until a timeout.
I have two ideas:
To launch this function in another separate thread and write WaitForSingleObject( hThread, dwTimeout ):
DWORD WINAPI WaitWorker( LPVOID )
{
    while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
        ;
}
But it will "eat" almost 100% of CPU time - bad idea.
The second idea is to allow other threads to run:
DWORD WINAPI WaitWorker( LPVOID )
{
    while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
        Sleep( 0 );
}
But it'll switch the execution context into kernel mode and back - too expensive for my task. Bad idea too.
The question is:
How do I perform almost-zero-overhead waiting until my variable becomes zero? Maybe without a separate thread... The main condition is to support stopping the wait on a timeout.
Maybe someone can suggest a completely different idea for my task - waiting for all registered operations (like WinAPI's thread pools - that API has, for instance, WaitForThreadpoolWaitCallbacks to wait for ALL registered tasks).
PS: it is not possible to rewrite my code with ThreadPool API :(
Have a look at the WaitOnAddress() and WakeByAddressSingle()/WakeByAddressAll() functions introduced in Windows 8.
For example:
class CPendingOperationGuard final
{
public:
    CPendingOperationGuard()
    {
        InterlockedIncrementAcquire(&m_ullCounter);
        WakeByAddressAll(&m_ullCounter);
    }
    ~CPendingOperationGuard()
    {
        InterlockedDecrementAcquire(&m_ullCounter);
        WakeByAddressAll(&m_ullCounter);
    }
    static bool WaitForAll( DWORD dwTimeOut )
    {
        ULONGLONG Captured, Now, Deadline = GetTickCount64() + dwTimeOut;
        DWORD TimeRemaining;
        do
        {
            Captured = InterlockedExchangeAdd64((LONG64 volatile *)&m_ullCounter, 0);
            if (Captured == 0) return true;
            Now = GetTickCount64();
            if (Now >= Deadline) return false;
            TimeRemaining = static_cast<DWORD>(Deadline - Now);
        }
        while (WaitOnAddress(&m_ullCounter, &Captured, sizeof(ULONGLONG), TimeRemaining));
        return false;
    }
private:
    static volatile ULONGLONG m_ullCounter;
};
Raymond Chen wrote a series of blog articles about these functions:
WaitOnAddress lets you create a synchronization object out of any data variable, even a byte
Implementing a critical section in terms of WaitOnAddress
Spurious wakes, race conditions, and bogus FIFO claims: A peek behind the curtain of WaitOnAddress
Extending our critical section based on WaitOnAddress to support timeouts
Comparing WaitOnAddress with futexes (futexi? futexen?)
Creating a semaphore from WaitOnAddress
Creating a semaphore with a maximum count from WaitOnAddress
Creating a manual-reset event from WaitOnAddress
Creating an automatic-reset event from WaitOnAddress
A helper template function to wait for WaitOnAddress in a loop
You need something like Run-Down Protection for this task instead of CPendingOperationGuard.
Before beginning an operation, you call ExAcquireRundownProtection, and only if it returns TRUE do you begin executing the operation. At the end you must call ExReleaseRundownProtection.
So the pattern must be the following:
if (ExAcquireRundownProtection(&RunRef)) {
    do_operation();
    ExReleaseRundownProtection(&RunRef);
}
When you want to stop this process and wait for all active calls to do_operation() to finish, you call ExWaitForRundownProtectionRelease (instead of WaitWorker).
After ExWaitForRundownProtectionRelease is called, the ExAcquireRundownProtection routine will return FALSE (so new operations will not start after this). ExWaitForRundownProtectionRelease waits to return until all callers of ExReleaseRundownProtection have released the previously acquired run-down protection (i.e. until all current operations, if any exist, complete). When all outstanding accesses are completed, ExWaitForRundownProtectionRelease returns.
Unfortunately this API is implemented by the system only in kernel mode, with no analog in user mode. However, it is not hard to implement the idea yourself.
This is my example:
enum RundownState {
v_complete = 0, v_init = 0x80000000
};
template<typename T>
class RundownProtection
{
LONG _Value;
public:
_NODISCARD BOOL IsRundownBegin()
{
return 0 <= _Value;
}
_NODISCARD BOOL AcquireRP()
{
LONG Value, NewValue;
if (0 > (Value = _Value))
{
do
{
NewValue = InterlockedCompareExchangeNoFence(&_Value, Value + 1, Value);
if (NewValue == Value) return TRUE;
} while (0 > (Value = NewValue));
}
return FALSE;
}
void ReleaseRP()
{
if (InterlockedDecrement(&_Value) == v_complete)
{
static_cast<T*>(this)->RundownCompleted();
}
}
void Rundown_l()
{
InterlockedBitTestAndResetNoFence(&_Value, 31);
}
void Rundown()
{
if (AcquireRP())
{
Rundown_l();
ReleaseRP();
}
}
RundownProtection(RundownState Value = v_init) : _Value(Value)
{
}
void Init()
{
_Value = v_init;
}
};
///////////////////////////////////////////////////////////////
class OperationGuard : public RundownProtection<OperationGuard>
{
friend RundownProtection<OperationGuard>;
HANDLE _hEvent;
void RundownCompleted()
{
SetEvent(_hEvent);
}
public:
OperationGuard() : _hEvent(0) {}
~OperationGuard()
{
if (_hEvent)
{
CloseHandle(_hEvent);
}
}
ULONG WaitComplete(ULONG dwMilliseconds = INFINITE)
{
return WaitForSingleObject(_hEvent, dwMilliseconds);
}
ULONG Init()
{
return (_hEvent = CreateEvent(0, 0, 0, 0)) ? NOERROR : GetLastError();
}
} g_guard;
//////////////////////////////////////////////
ULONG CALLBACK PendingOperationThread(void*)
{
while (g_guard.AcquireRP())
{
Sleep(1000);// do operation
g_guard.ReleaseRP();
}
return 0;
}
void demo()
{
if (g_guard.Init() == NOERROR)
{
if (HANDLE hThread = CreateThread(0, 0, PendingOperationThread, 0, 0, 0))
{
CloseHandle(hThread);
}
MessageBoxW(0, 0, L"UI Thread", MB_ICONINFORMATION|MB_OK);
g_guard.Rundown();
g_guard.WaitComplete();
}
}
Why simply waiting until m_ullCounter becomes zero is not enough:
If we read 0 from m_ullCounter, this only means that there is no active operation at this moment. But a pending operation can begin right after we have checked that m_ullCounter == 0. We can use a special flag (say bool g_bQuit) and set it; an operation checks this flag before it begins and does not begin if the flag is true. But even this is not enough.
Naive code:
//worker thread
if (!g_bQuit)                            // (1)
{
    // MessageBoxW(0, 0, L"simulate delay", MB_ICONWARNING);
    InterlockedIncrement(&g_ullCounter); // (4)
    // do operation
    InterlockedDecrement(&g_ullCounter); // (5)
}

// here we wait for all operation done
g_bQuit = true;                          // (2)
// wait on g_ullCounter == 0, how - not important
while (g_ullCounter) continue;           // (3)
1. The pending operation checks the g_bQuit flag (1) - it is still false, so the operation begins.
2. The worker thread is swapped out (the MessageBox simulates this).
3. We set g_bQuit = true (2).
4. We check/wait for g_ullCounter == 0; it is 0, so we exit (3).
5. The worker thread wakes up (returns from MessageBox) and increments g_ullCounter (4).
The problem here is that the operation can use resources which we have already begun to destroy after seeing g_ullCounter == 0.
This happens because checking the quit flag (g_bQuit) and incrementing the counter afterwards is not atomic - there can be a gap between them.
For a correct solution we need atomic access to flag+counter, and that is what run-down protection does. A single LONG variable (32 bits) is used for flag+counter because we can access it atomically: 31 bits are used for the counter and 1 bit for the quit flag. The Windows solution uses bit 0 for the flag (1 means quit) and bits [1..31] for the counter; I use bits [0..30] for the counter and bit 31 for the flag (0 means quit). Look for

How to determine which thread is done

I have a loop which calls pthread_join, but the order of the loop does not match the order of the threads' termination.
How can I monitor thread completion and then call join?
for ( int th=0; th<sections; th++ )
{
    cout<<"start joining "<<th<<endl<<flush;
    result_code = pthread_join( threads[th] , (void**)&status);
    cout<<th<<" join error "<<strerror(result_code)<<endl<<flush;
    cout<<"Join status is "<<status<<endl<<flush;
}
This is my solution, which seems to maximize multi-threading throughput by serving the first thread that finishes. This solution does not depend on pthread_join loop order.
// loop & wait for the first done thread
std::bitset<Nsections> ready;
std::bitset<Nsections> done;
ready.reset();
for (unsigned b=0; b<sections; b++) ready.flip(b);
done = ready;
unsigned currIdx = 1;
int th = 0;
int th_= 0;
int stat;
while ( done.any() )
{
// main loop waits for the 1st thread to complete.
// completion is checked via the global vector
// vStatus (singleton, write protected)
// and not by the pthread_exit returned value,
// in order to maximize throughput by
// post-processing the first
// finished thread.
if ( (obj.vStatus).empty() ) { Sleep (5); continue; }
while ( ready.any() )
{
if ( sections == 1 ) break;
if ( !(obj.vStatus).empty() )
{
if ( currIdx <= (obj.vStatus).size() )
{
th_ = currIdx-1;
std::string s =
ready.to_string<char,std::string::traits_type,std::string::allocator_type>();
cout<<"checking "<<th_<<"\t"<<s<<"\t"
<<(ready.test(th_)?"T":"F")<<"\t"<<(obj.vStatus)[th_].retVal <<endl;
if ((obj.vStatus)[th_].retVal < 1)
{
if (ready.test(th_))
{
th=th_;
ready.reset(th);
goto retry;
}
}
}
}
Sleep (2);
} // while ready
retry:
cout<<"start joining "<<th<<endl<<flush;
result_code = pthread_join( threads[th] , (void**)&status);
switch (result_code)
{
case EDEADLK: goto retry; break;
case EINVAL:
case ESRCH:
case 0:
currIdx++;
stat = status->retVal;
free (status);
done.reset(th);
std::string s =
done.to_string<char,std::string::traits_type,std::string::allocator_type>();
cout<<"joined thread "<<th<<"\t"<<s<<"\t"
<<(done.test(th)?"T":"F")<<"\t"<<stat <<endl;
while (true)
{
auto ret=pthread_cancel ( threads[th] ) ;
if (ret == ESRCH) { netTH--; break; }
Sleep (20);
}
break;
}
How can I monitor thread completion then call join ?
By letting join detect the completion. (i.e. do nothing special)
I have a loop which calls pthread_join but the order of the loop does not match the order of thread's termination.
The order of the loop does not matter.
a) thread[main] calling thread[1].join will simply be suspended until thread[1] exits. After that, thread[main] will be allowed to continue with the rest of the loop.
b) When thread[2] terminates before thread[1], thread[main] calling thread[2].join simply returns immediately. Again, thread[main] continues.
c) The effort to ensure thread[1] terminates prior to thread[2] (to match the loop sequence) is surprisingly time-consuming, with no benefit.
Update in progress ... looking for code I thought I have already submitted.
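Until that code turns up, here is a minimal sketch of the plain join-in-order loop described in a)-c); the threads may finish in any order, and the loop still collects each one:
// Minimal sketch: join in index order even though threads may finish in any order.
#include <pthread.h>
#include <iostream>
using namespace std;

void *worker(void *arg)
{
    // ... the section's work ...
    return arg;                 // value is retrieved through pthread_join
}

int main()
{
    const int sections = 4;
    pthread_t threads[sections];
    for (long th = 0; th < sections; th++)
        pthread_create(&threads[th], NULL, worker, (void*)th);

    for (int th = 0; th < sections; th++)
    {
        void *status = NULL;
        // Blocks only while thread th is still running; returns at once if it already exited.
        int result_code = pthread_join(threads[th], &status);
        cout << "joined " << th << " rc=" << result_code << " status=" << status << endl;
    }
    return 0;
}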

Sleeping thread and thread initialization inside constructor

I'm trying to make a thread run out of a ctor. The thread should sleep, wake up, perform a buffer dump, and then sleep again, and so on. This is the code for the ctor:
Logger::Logger()
{
    BufferInUse = &CyclicBuffer1; //buffer 1 will be used at beginning
    MaxBufferSize = 5; //initial state
    NumOfCycles = 0;
    CurrentMaxStringLength = 0;
    position = BufferInUse->end();
    OutPutMethod = odBuffer; //by default
    Thresh = 1; //by default
    hTimer = CreateWaitableTimer(NULL, TRUE, NULL);
    EventTime.QuadPart = -20000000; //1 second by default
    Mutex = CreateMutex(NULL,FALSE,NULL);
    if (Mutex == NULL)
    {
        OutputDebugStringA("CreateMutex error! the Logger will close \n");
        return ;
    }
    _beginthread( Logger::WorkerThread , 0,(void*)this ); //run the thread
}
When I debug it, it takes a lot of time for the thread to even be created and for the ctor to finish, but in that time my object's member functions get called lots of times (I see it when debugging).
1. I want the thread to be created before my member functions get called; what is the best way to achieve that?
Now my thread implementation is:
void Logger::WorkerThread ( void *lpParam )
{
    Logger *log = static_cast <Logger*> (lpParam);
    if (NULL == log->hTimer)
    {
        log->LogStringToOutput("CreateWaitableTimer() failed , Logger will close \n");
        return;
    }
    for(;;)
    {
        //set timer for time specified by the EventTime variable inside the Logger
        if (!SetWaitableTimer(log->hTimer, & (log->EventTime), 0, NULL, NULL, 0))
        {
            log->LogStringToOutput("SetWaitableTimer() failed , Logger will close\n" );
            _endthread();
        }
        //wait for timer
        if (WaitForSingleObject(log->hTimer, INFINITE) != WAIT_OBJECT_0)
        {
            log->LogStringToOutput("WaitForSingleObject() failed! Logger will close\n");
            _endthread();
            return;
        }
        if(log->getOutputMethod() == odBuffer && log->BufferInUse->size() >= log->Thresh && !log->BufferInUse->empty())
        {
            TTFW_LogRet ret;
            ret = log->FullBufferDump();
            if (ret != SUCCESS)
            {
                log->LogStringToOutput("Error occured in dumping cyclic buffer , the buffer will be cleared\n");
            }
        }
    }
}
Is there a more elegant implementation of this thread functionality?
You need some mechanism to synchronize WorkerThread start-up and member-function access.
For example, use a condition variable (documented on MSDN):
add 3 members to Logger:
class Logger{
    ...
private:
    CRITICAL_SECTION CritSection;
    CONDITION_VARIABLE ConditionVar;
    bool WorkerThreadStarted;
    ...
};
and
Logger::Logger():WorkerThreadStarted(false)
{
    InitializeCriticalSection(&CritSection);    //added
    InitializeConditionVariable(&ConditionVar); //added
    BufferInUse = &CyclicBuffer1; //buffer 1 will be used at beginning
    ...
}
void Logger::WorkerThread ( void *lpParam )
{
    Logger *log = static_cast <Logger*> (lpParam);
    EnterCriticalSection(&log->CritSection);      //added: publish the "started" flag under the lock
    log->WorkerThreadStarted = true;              //added
    WakeAllConditionVariable(&log->ConditionVar); //added: wake anyone blocked in EnsureInitiallized()
    LeaveCriticalSection(&log->CritSection);      //added
    if (NULL == log->hTimer)
    {
        log->LogStringToOutput("CreateWaitableTimer() failed , Logger will close \n");
        return;
    }
    ...
}
add such a function:
void Logger::EnsureInitiallized(){
    EnterCriticalSection(&CritSection);
    // Wait until the predicate is TRUE
    while( !WorkerThreadStarted )
    {
        SleepConditionVariableCS(&ConditionVar, &CritSection, INFINITE);
    }
    LeaveCriticalSection(&CritSection);
}
and at every member function's entry, call EnsureInitiallized();
void Logger::yourFunction(){
    EnsureInitiallized();
    ...
}
That is just one example; you could also use a read-write lock, an atomic integer, etc.
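For instance, here is a minimal sketch of the same idea using a manual-reset event instead of the condition variable (hStartedEvent is a hypothetical HANDLE member of Logger, not part of the original code; it would also need to be closed in the destructor):
// Sketch: signal "worker thread started" with a manual-reset event.
Logger::Logger()
{
    hStartedEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset, initially non-signaled
    // ... rest of the initialization as before ...
    _beginthread( Logger::WorkerThread, 0, (void*)this ); //run the thread
}
void Logger::WorkerThread( void *lpParam )
{
    Logger *log = static_cast<Logger*>(lpParam);
    SetEvent(log->hStartedEvent); // announce that the worker thread is up
    // ... timer loop as before ...
}
void Logger::EnsureInitiallized()
{
    WaitForSingleObject(hStartedEvent, INFINITE); // returns immediately once the event is signaled
}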

C++ Map Iteration and Stack Corruption

I am trying to use a system of maps to store and update data for a chat server. The application is multithreaded and uses a lock system to prevent multiple threads from accessing the data at the same time.
The problem is this: when a client is removed individually from the map, it is fine. However, when I try to call multiple closes, it leaves some entries in memory. If I at any point call ::clear() on the map, it causes a debug assertion error such as "Iterator not compatible" or similar. The code works the first time (tested with 80+ consoles connected), but because it leaves chunks behind, it will not work again. I have tried researching ways around this and have written systems to halt code execution until each process has completed. I appreciate any help, and I have attached the relevant code snippets.
//portion of server code that handles shutting down
DWORD WINAPI runserver(void *params) {
runserverPARAMS *p = (runserverPARAMS*)params;
/*Server stuff*/
serverquit = 0;
//client based cleanup
vector<int> tokill;
map<int,int>::iterator it = clientsockets.begin();
while(it != clientsockets.end()) {
tokill.push_back(it->first);
++it;
}
for(;;) {
for each (int x in tokill) {
clientquit[x] = 1;
while(clientoffline[x] != 1) {
//haulting execution until thread has terminated
}
destoryclient(x);
}
}
//client thread based cleanup complete.
return 0;
}
//clientioprelim
DWORD WINAPI clientioprelim(void* params) {
CLIENTthreadparams *inparams = (CLIENTthreadparams *)params;
/*Socket stuff*/
for(;;) {
/**/
}
else {
if(clientquit[inparams->clientid] == 1)
break;
}
}
clientoffline[inparams->clientid] = 1;
return 0;
}
int LOCKED; //exported as extern via libraries.h so it's visible to other source files
void destoryclient(int clientid) {
for(;;) {
if(LOCKED == 0) {
LOCKED = 1;
shutdown(clientsockets[clientid], 2);
closesocket(clientsockets[clientid]);
if((clientsockets.count(clientid) != 0) && (clientsockets.find(clientid) != clientsockets.end()))
clientsockets.erase(clientsockets.find(clientid));
if((clientname.count(clientid) != 0) && (clientname.find(clientid) != clientname.end()))
clientname.erase(clientname.find(clientid));
if((clientusername.count(clientid) != 0) && (clientusername.find(clientid) != clientusername.end()))
clientusername.erase(clientusername.find(clientid));
if((clientaddr.count(clientid) != 0) && (clientaddr.find(clientid) != clientaddr.end()))
clientaddr.erase(clientusername.find(clientid));
if((clientcontacts.count(clientid) != 0) && (clientcontacts.find(clientid) != clientcontacts.end()))
clientcontacts.erase(clientcontacts.find(clientid));
if((clientquit.count(clientid) != 0) && (clientquit.find(clientid) != clientquit.end()))
clientquit.erase(clientquit.find(clientid));
if((clientthreads.count(clientid) != 0) && (clientthreads.find(clientid) != clientthreads.end()))
clientthreads.erase(clientthreads.find(clientid));
LOCKED = 0;
break;
}
}
return;
}
Are you really using an int for locking, or was that just a simplification of the code? If you really use an int: this won't work, and the critical section can be entered twice (or more) simultaneously if both threads check the variable before one of them assigns to it (simplified). See mutexes on Wikipedia for reference. You could use some sort of mutex provided by Windows, or boost thread, instead of the int.
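For example, a minimal sketch (assuming one global lock, mirroring the original single LOCKED flag) using a Windows CRITICAL_SECTION instead of the int:
// Sketch: replace the int LOCKED flag with a real lock.
CRITICAL_SECTION clientsLock; // call InitializeCriticalSection(&clientsLock) once at startup

void destoryclient(int clientid) {
    EnterCriticalSection(&clientsLock); // blocks until no other thread holds the lock
    shutdown(clientsockets[clientid], 2);
    closesocket(clientsockets[clientid]);
    clientsockets.erase(clientid); // map::erase(key) is a no-op if the key is absent,
    clientname.erase(clientid);    // so the count()/find() checks are not needed
    clientusername.erase(clientid);
    clientaddr.erase(clientid);
    clientcontacts.erase(clientid);
    clientquit.erase(clientid);
    clientthreads.erase(clientid);
    LeaveCriticalSection(&clientsLock);
}
Note that this is not a drop-in replacement: every other piece of code that touches these maps (for example runserver and clientioprelim) must enter the same critical section, otherwise iteration over clientsockets can still race with the erase calls.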

Mutex can't acquire lock

I have a problem where one of my functions can't acquire the lock on one of the 2 mutexes I use.
I did some basic debugging in VC++ 2010, setting some breakpoints, and it seems that wherever the lock is acquired, it does get unlocked.
The code that uses the mutexes is as follows:
#define SLEEP(x) { Sleep(x); }
#include<windows.h>
void Thread::BackgroundCalculator( void *unused ){
    while( true ){
        if(MUTEX_LOCK(&mutex_q, 5) == 1){
            if(!QueueVector.empty()){
                //cut
                MUTEX_UNLOCK(&mutex_q);
                //cut
                while(MUTEX_LOCK(&mutex_p,90000) != 1){}
                //cut
                MUTEX_UNLOCK(&mutex_p);
            }
        }
        SLEEP(25);
    }
}
Then somewhere else:
PLUGIN_EXPORT void PLUGIN_CALL
ProcessTick(){
    if(g_Ticked == g_TickMax){
        if(MUTEX_LOCK(&mutex_p, 1) == 1){
            if(!PassVector.empty()){
                PassVector.pop();
            }
            MUTEX_UNLOCK(&mutex_p);
        }
        g_Ticked = -1;
    }
    g_Ticked += 1;
}
static cell AMX_NATIVE_CALL n_CalculatePath( AMX* amx, cell* params ){
    if(MUTEX_LOCK(&mutex_q,1) == 1){
        QueueVector.push_back(QuedData(params[1],params[2],params[3],amx));
        MUTEX_UNLOCK(&mutex_q);
        return 1;
    }
    return 0;
}
init:
PLUGIN_EXPORT bool PLUGIN_CALL Load( void **ppData ) {
    MUTEX_INIT(&mutex_q);
    MUTEX_INIT(&mutex_p);
    START_THREAD( Thread::BackgroundCalculator, 0);
    return true;
}
Some variables and functions:
int MUTEX_INIT(MUTEX *mutex){
    *mutex = CreateMutex(0, FALSE, 0);
    return (*mutex==0);
}
int MUTEX_LOCK(MUTEX *mutex, int Timex = -1){
    if(WaitForSingleObject(*mutex, Timex) == WAIT_OBJECT_0){
        return 1;
    }
    return 0;
}
int MUTEX_UNLOCK(MUTEX *mutex){
    return ReleaseMutex(*mutex);
}
MUTEX mutex_q = NULL;
MUTEX mutex_p = NULL;
and defines:
# include <process.h>
# define OS_WINDOWS
# define MUTEX HANDLE
# include <Windows.h>
# define EXIT_THREAD() { _endthread(); }
# define START_THREAD(a, b) { _beginthread( a, 0, (void *)( b ) ); }
Thread header file:
#ifndef __THREAD_H
#define __THREAD_H
class Thread{
public:
Thread ( void );
~Thread ( void );
static void BackgroundCalculator ( void *unused );
};
#endif
Well, I can't seem to find the issue.
After debugging I wanted to "force" acquiring the lock with this code (from the Pawn abstract machine):
if (strcmp("/routeme", cmdtext, true) == 0){
    new fromnode = NearestPlayerNode(playerid);
    new start = GetTickCount();
    while(CalculatePath(fromnode,14,playerid+100) == 0){
        printf("0 %d",fromnode);
    }
    printf("1 %d",fromnode);
    printf("Time: %d",GetTickCount()-start);
    return 1;
}
but it keeps looping endlessly; CalculatePath calls static cell AMX_NATIVE_CALL n_CalculatePath( AMX* amx, cell* params ).
That was a bit of a surprise. Does anyone see a mistake?
If you need the full source code it is available at:
http://gpb.googlecode.com/files/RouteConnector_174alpha.zip
Extra info:
PLUGIN_EXPORT bool PLUGIN_CALL Load
only gets executed at startup.
static cell AMX_NATIVE_CALLs
only get executed when called from a virtual machine.
ProcessTick()
gets executed on every process tick of the application; after the application has finished its own jobs, it calls this one in the extensions.
For now I have only tested the code on Windows, but it does compile fine on Linux.
Edit: removed the Linux code to shorten the post.
From what I see, your first snippet unlocks the mutex only based on some condition, i.e. in pseudocode it is like:
mutex.lock ()
if some_unrelated_thing:
    mutex.unlock ()
As I understand your code, this way the first snippet can in principle lock and then never unlock.
Another potential problem is that your code is ultimately exception-unsafe. Are you really able to guarantee that no exceptions happen between lock/unlock operations? Because if any uncaught exception is ever thrown, you get into a deadlock like described. I'd suggest using some sort of RAII here.
EDIT:
Untested RAII way of performing lock/unlock:
struct Lock
{
    MUTEX& mutex;
    bool locked;

    Lock (MUTEX& mutex)
    : mutex (mutex),
      locked (false)
    { }

    ~Lock ()
    { release (); }

    bool acquire (int timeout = -1)
    {
        if (!locked && WaitForSingleObject (mutex, timeout) == WAIT_OBJECT_0)
            locked = true;
        return locked;
    }

    int release ()
    {
        if (locked && ReleaseMutex (mutex))
            locked = false;
        return !locked;
    }
};
Usage could be like this:
{
    Lock q (mutex_q);
    if (q.acquire (5)) {
        if (!QueueVector.empty ()) {
            q.release ();
            ...
        }
    }
}
Note that this way ~Lock always releases the mutex, whether you did that explicitly or not, whether the scope block exited normally or due to an uncaught exception.
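For instance, a small sketch (doWork() is a hypothetical function that may throw) of why the destructor-based release matters:
void consumeOne ()
{
    Lock q (mutex_q);
    if (q.acquire (5))
    {
        doWork (QueueVector.back ()); // hypothetical; may throw
        QueueVector.pop_back ();
    }
} // ~Lock also runs during stack unwinding, so mutex_q is released even if doWork() throws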
I'm not sure if this is intended behavior, but in this code:
void Thread::BackgroundCalculator( void *unused ){
    while( true ){
        if(MUTEX_LOCK(&mutex_q, 5) == 1){
            if(!QueueVector.empty()){
                //cut
                MUTEX_UNLOCK(&mutex_q);
                //cut
                while(MUTEX_LOCK(&mutex_p,90000) != 1){}
                //cut
                MUTEX_UNLOCK(&mutex_p);
            }
        }
        SLEEP(25);
    }
}
if QueueVector.empty() is true, you are never unlocking mutex_q.
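Tying the two answers together, here is a minimal sketch (using the Lock helper from the answer above, with the //cut parts left in place) of a BackgroundCalculator that releases mutex_q on every path, including the empty-queue one:
void Thread::BackgroundCalculator( void *unused ){
    while( true ){
        {
            Lock q (mutex_q);
            if (q.acquire (5) && !QueueVector.empty ()) {
                //cut
                q.release (); // done with the queue, release before the long wait
                //cut
                Lock p (mutex_p);
                while (!p.acquire (90000)) { }
                //cut
            }
        } // ~Lock releases whatever is still held, even when QueueVector was empty
        SLEEP(25);
    }
}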