I found some code that claims to make a thread sleep for an accurate amount of time. Testing it out, it seems to work well, but it always deadlocks after a short amount of time.
Here is the original code. I put prints before entering and leaving the critical section, and saw that sometimes it leaves or enters twice in a row. It seems to deadlock at the EnterCriticalSection call within the Wait function.
Is there a way I can modify this code to retain its functionality while not deadlocking?
//----------------------------------------------------------------
class PreciseTimer
{
public:
PreciseTimer() : mRes(0), toLeave(false), stopCounter(-1)
{
InitializeCriticalSection(&crit);
mRes = timeSetEvent(1, 0, &TimerProc, (DWORD)this,
TIME_PERIODIC);
}
virtual ~PreciseTimer()
{
mRes = timeKillEvent(mRes);
DeleteCriticalSection(&crit);
}
///////////////////////////////////////////////////////////////
// Function name : Wait
// Description : Waits for the required duration of msecs.
// : Timer resolution is precisely 1 msec
// Return type : void :
// Argument : int timeout : timeout in msecs
///////////////////////////////////////////////////////////////
void Wait(int timeout)
{
if ( timeout )
{
stopCounter = timeout;
toLeave = true;
// this will do the actual delay - timer callback shares
// same crit section
EnterCriticalSection(&crit);
LeaveCriticalSection(&crit);
}
}
///////////////////////////////////////////////////////////////
// Function name : TimerProc
// Description : Timer callback procedure that is called
// : every 1msec
// : by high resolution media timers
// Return type : void CALLBACK :
// Argument : UINT uiID :
// Argument : UINT uiMsg :
// Argument : DWORD dwUser :
// Argument : DWORD dw1 :
// Argument : DWORD dw2 :
///////////////////////////////////////////////////////////////
static void CALLBACK TimerProc(UINT uiID, UINT uiMsg, DWORD
dwUser, DWORD dw1, DWORD dw2)
{
static volatile bool entered = false;
PreciseTimer* pThis = (PreciseTimer*)dwUser;
if ( pThis )
{
if ( !entered && !pThis->toLeave ) // block section as
// soon as we can
{
entered = true;
EnterCriticalSection(&pThis->crit);
}
else if ( pThis->toLeave && pThis->stopCounter == 0 )
// leave section
// when counter
// has expired
{
pThis->toLeave = false;
entered = false;
LeaveCriticalSection(&pThis->crit);
}
else if ( pThis->stopCounter > 0 ) // if counter is set
// to anything, then
// continue to drop
// it...
--pThis->stopCounter;
}
}
private:
MMRESULT mRes;
CRITICAL_SECTION crit;
volatile bool toLeave;
volatile int stopCounter;
};
A deadlock in EnterCriticalSection() usually means that another thread called EnterCriticalSection() but never called LeaveCriticalSection().
As shown, this code is not very thread-safe (and timeSetEvent() is a threaded timer). If multiple PreciseTimer timers are running at the same time, they all use the same TimerProc() callback and thus share the same entered variable without protecting it from concurrent access. If multiple threads call Wait() on the same PreciseTimer object at the same time, they will step on each other's use of the stopCounter and toLeave members, which are likewise unprotected. Even a single thread calling Wait() on a single PreciseTimer is not safe, since TimerProc() runs in its own thread and stopCounter is not adequately protected.
This code is full of race conditions.
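If replacing the mechanism is acceptable, here is a minimal sketch of one alternative (not the original design, just an illustration): raise the system timer resolution with timeBeginPeriod() for the duration of the wait and block on a waitable timer, so no lock is ever shared with a timer callback. The PreciseSleep name is made up here; link with winmm.lib.
#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod/timeEndPeriod, link with winmm.lib

// Sleep for roughly 'ms' milliseconds with ~1 ms granularity.
void PreciseSleep(DWORD ms)
{
    timeBeginPeriod(1);                           // request 1 ms scheduler resolution
    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);
    if (timer)
    {
        LARGE_INTEGER due;
        due.QuadPart = -(LONGLONG)ms * 10000;     // negative = relative time, 100-ns units
        if (SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE))
            WaitForSingleObject(timer, INFINITE); // plain blocking wait, no shared lock
        CloseHandle(timer);
    }
    timeEndPeriod(1);                             // restore the previous resolution
}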
I'm writing a multithreaded program that can execute some tasks in separate threads.
Some operations must be waited for at the end of my program's execution. I've written a simple guard for such "important" operations:
class CPendingOperationGuard final
{
public:
CPendingOperationGuard()
{
InterlockedIncrementAcquire( &m_ullCounter );
}
~CPendingOperationGuard()
{
InterlockedDecrementAcquire( &m_ullCounter );
}
static bool WaitForAll( DWORD dwTimeOut )
{
// Here is a topic of my question
// Return false on timeout
// Return true if wait was successful
}
private:
static volatile ULONGLONG m_ullCounter;
};
Usage is simple:
void ImportantTask()
{
CPendingOperationGuard guard;
// Do work
}
// ...
void StopExecution()
{
if(!CPendingOperationGuard::WaitForAll( 30000 )) {
// Handle error
}
}
The question is: how to efficiently wait until m_ullCounter becomes zero, or until the timeout expires.
I have two ideas:
Launch a worker function in a separate thread and call WaitForSingleObject( hThread, dwTimeout ):
DWORD WINAPI WaitWorker( LPVOID )
{
while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
;
}
But it will "eat" almost 100% of the CPU time - bad idea.
The second idea is to let other threads run:
DWORD WINAPI WaitWorker( LPVOID )
{
while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
Sleep( 0 );
}
But it will switch the execution context into kernel mode and back, which is too expensive for my task. Bad idea too.
The question is:
How can I wait with almost zero overhead until my variable becomes zero, preferably without a separate thread? The main requirement is that the wait can be stopped by a timeout.
Maybe someone can suggest a completely different approach for my task: waiting for all registered operations (like in the WinAPI thread pools, whose API has, for instance, WaitForThreadpoolWaitCallbacks to wait for ALL registered tasks).
PS: it is not possible to rewrite my code with ThreadPool API :(
Have a look at the WaitOnAddress() and WakeByAddressSingle()/WakeByAddressAll() functions introduced in Windows 8 (declared in synchapi.h; link with Synchronization.lib).
For example:
class CPendingOperationGuard final
{
public:
CPendingOperationGuard()
{
InterlockedIncrementAcquire(&m_ullCounter);
WakeByAddressAll(&m_ullCounter);
}
~CPendingOperationGuard()
{
InterlockedDecrementAcquire(&m_ullCounter);
WakeByAddressAll(&m_ullCounter);
}
static bool WaitForAll( DWORD dwTimeOut )
{
ULONGLONG Captured, Now, Deadline = GetTickCount64() + dwTimeOut;
DWORD TimeRemaining;
do
{
Captured = InterlockedExchangeAdd64((LONG64 volatile *)&m_ullCounter, 0);
if (Captured == 0) return true;
Now = GetTickCount64();
if (Now >= Deadline) return false;
TimeRemaining = static_cast<DWORD>(Deadline - Now);
}
while (WaitOnAddress(&m_ullCounter, &Captured, sizeof(ULONGLONG), TimeRemaining));
return false;
}
private:
static volatile ULONGLONG m_ullCounter;
};
Raymond Chen wrote a series of blog articles about these functions:
WaitOnAddress lets you create a synchronization object out of any data variable, even a byte
Implementing a critical section in terms of WaitOnAddress
Spurious wakes, race conditions, and bogus FIFO claims: A peek behind the curtain of WaitOnAddress
Extending our critical section based on WaitOnAddress to support timeouts
Comparing WaitOnAddress with futexes (futexi? futexen?)
Creating a semaphore from WaitOnAddress
Creating a semaphore with a maximum count from WaitOnAddress
Creating a manual-reset event from WaitOnAddress
Creating an automatic-reset event from WaitOnAddress
A helper template function to wait for WaitOnAddress in a loop
You need something like Run-Down Protection for this task instead of CPendingOperationGuard.
Before beginning an operation, call ExAcquireRundownProtection and begin the operation only if it returns TRUE. At the end, you must call ExReleaseRundownProtection.
So the pattern must be:
if (ExAcquireRundownProtection(&RunRef)) {
do_operation();
ExReleaseRundownProtection(&RunRef);
}
When you want to stop this process and wait for all active calls to do_operation() to finish, call ExWaitForRundownProtectionRelease (instead of WaitWorker).
After ExWaitForRundownProtectionRelease is called, ExAcquireRundownProtection returns FALSE, so no new operations can start. ExWaitForRundownProtectionRelease then waits until every previously acquired run-down protection has been released via ExReleaseRundownProtection, i.e. until all currently running operations (if any) have completed. When all outstanding accesses are completed, ExWaitForRundownProtectionRelease returns.
Unfortunately this API is implemented by the system only in kernel mode and has no user-mode analogue. However, it is not hard to implement the same idea yourself.
This is my example:
enum RundownState {
v_complete = 0, v_init = 0x80000000
};
template<typename T>
class RundownProtection
{
LONG _Value;
public:
_NODISCARD BOOL IsRundownBegin()
{
return 0 <= _Value;
}
_NODISCARD BOOL AcquireRP()
{
LONG Value, NewValue;
if (0 > (Value = _Value))
{
do
{
NewValue = InterlockedCompareExchangeNoFence(&_Value, Value + 1, Value);
if (NewValue == Value) return TRUE;
} while (0 > (Value = NewValue));
}
return FALSE;
}
void ReleaseRP()
{
if (InterlockedDecrement(&_Value) == v_complete)
{
static_cast<T*>(this)->RundownCompleted();
}
}
void Rundown_l()
{
InterlockedBitTestAndResetNoFence(&_Value, 31);
}
void Rundown()
{
if (AcquireRP())
{
Rundown_l();
ReleaseRP();
}
}
RundownProtection(RundownState Value = v_init) : _Value(Value)
{
}
void Init()
{
_Value = v_init;
}
};
///////////////////////////////////////////////////////////////
class OperationGuard : public RundownProtection<OperationGuard>
{
friend RundownProtection<OperationGuard>;
HANDLE _hEvent;
void RundownCompleted()
{
SetEvent(_hEvent);
}
public:
OperationGuard() : _hEvent(0) {}
~OperationGuard()
{
if (_hEvent)
{
CloseHandle(_hEvent);
}
}
ULONG WaitComplete(ULONG dwMilliseconds = INFINITE)
{
return WaitForSingleObject(_hEvent, dwMilliseconds);
}
ULONG Init()
{
return (_hEvent = CreateEvent(0, 0, 0, 0)) ? NOERROR : GetLastError();
}
} g_guard;
//////////////////////////////////////////////
ULONG CALLBACK PendingOperationThread(void*)
{
while (g_guard.AcquireRP())
{
Sleep(1000);// do operation
g_guard.ReleaseRP();
}
return 0;
}
void demo()
{
if (g_guard.Init() == NOERROR)
{
if (HANDLE hThread = CreateThread(0, 0, PendingOperationThread, 0, 0, 0))
{
CloseHandle(hThread);
}
MessageBoxW(0, 0, L"UI Thread", MB_ICONINFORMATION|MB_OK);
g_guard.Rundown();
g_guard.WaitComplete();
}
}
Why simply waiting until m_ullCounter becomes zero is not enough:
If we read 0 from m_ullCounter, this only means that there is no active operation at that moment; a pending operation can still begin right after we check that m_ullCounter == 0. We could use a special flag (say bool g_bQuit) and set it, so that an operation checks this flag before it begins and does not start if the flag is true. But even that is not enough.
Naive code:
//worker thread
if (!g_bQuit) // (1)
{
// MessageBoxW(0, 0, L"simulate delay", MB_ICONWARNING);
InterlockedIncrement(&g_ullCounter); // (4)
// do operation
InterlockedDecrement(&g_ullCounter); // (5)
}
// here we wait for all operations to be done
g_bQuit = true; // (2)
// wait on g_ullCounter == 0, how - not important
while (g_ullCounter) continue; // (3)
1. The pending operation checks the g_bQuit flag (1); it is still false, so the operation begins.
2. The worker thread is swapped out (the MessageBox simulates this).
3. We set g_bQuit = true; (2).
4. We check/wait for g_ullCounter == 0; it is 0, so we exit the wait (3).
5. The worker thread wakes up (returns from the MessageBox) and increments g_ullCounter (4).
The problem is that the operation may use resources which we have already begun destroying after seeing g_ullCounter == 0.
This happens because checking the quit flag (g_bQuit) and incrementing the counter afterwards is not atomic: there can be a gap between them.
For a correct solution we need atomic access to flag + counter, and that is exactly what run-down protection does. A single LONG variable (32 bits) is used for flag + counter because we can access it atomically: 31 bits hold the counter and 1 bit holds the quit flag. The Windows kernel implementation uses bit 0 for the flag (1 means quit) and bits [1..31] for the counter; I use bits [0..30] for the counter and bit 31 for the flag (0 means quit), as in the RundownProtection example above.
I have a C++ app with 2 threads. The app displays a gauge on the screen, with an indicator that rotates based on an angle received via UDP socket. My problem is that the indicator should be rotating at a constant rate but it behaves like time slows down at times, and it also fast-forwards to catch up quickly at other times, with some pauses intermittently.
Each frame, the display (main) thread guards a copy of the angle from the UDP thread. The UDP thread also guards writing to the shared variable. I use a Windows CriticalSection object to guard the 'communication' between threads. The UDP packet is received at approximately the same rate as the display update. I am using Windows 7, 64 bit, with a 4-core processor.
I am using a separate python app to broadcast the UDP packet. I use the python function, time.sleep, to keep the broadcast at a constant rate.
Why does the application slow down?
Why does the application seem to fast-forward instead of snapping to the latest angle?
What is the proper fix?
EDIT: I am not 100% sure all angle values are remembered when the app seems to 'fast forward'. The app is snapping to some value (not sure if it is the 'latest') at times.
EDIT 2: per request, some code.
void App::udp_update(DWORD thread_id)
{
Packet p;
_socket.recv(p); // edit: blocks until transmission is received
{
Locker lock(_cs);
_packet = p;
}
}
void App::main_update()
{
float angle_copy = 0.f;
{
Locker lock(_cs);
angle_copy = _packet.angle;
}
draw(angle_copy); // edit: blocks until monitor refreshes
}
Thread.h
class CS
{
private:
friend Locker;
CRITICAL_SECTION _handle;
void _lock();
void _unlock();
// not implemented by design
CS(CS&);
CS& operator=(const CS&);
public:
CS();
~CS();
};
class Locker
{
private:
CS& _cs;
// not implemented by design
Locker();
Locker(const Locker&);
Locker& operator=(const Locker&);
public:
Locker(CS& c)
: _cs(c)
{
_cs._lock();
}
~Locker()
{
_cs._unlock();
}
};
class Win32ThreadPolicy
{
public:
typedef Functor<void,TYPELIST_1(DWORD)> Callback;
private:
Callback _callback;
//SECURITY_DESCRIPTOR _sec_descr;
//SECURITY_ATTRIBUTES _sec_attrib;
HANDLE _handle;
//DWORD _exitValue;
#ifdef USE_BEGIN_API
unsigned int _id;
#else // USE_BEGIN_API
DWORD _id;
#endif // USE_BEGIN_API
/*volatile*/ bool _is_joined;
#ifdef USE_BEGIN_API
static unsigned int WINAPI ThreadProc( void* lpParameter );
#else // USE_BEGIN_API
static DWORD WINAPI ThreadProc( LPVOID lpParameter );
#endif // USE_BEGIN_API
DWORD _run();
void _join();
// not implemented by design
Win32ThreadPolicy();
Win32ThreadPolicy(const Win32ThreadPolicy&);
Win32ThreadPolicy& operator=(const Win32ThreadPolicy&);
public:
Win32ThreadPolicy(Callback& func);
~Win32ThreadPolicy();
void Spawn();
void Join();
};
/// helps to manage parallel operations.
/// attempts to mimic the C++11 std::thread interface, but also passes the thread ID.
class Thread
{
public:
typedef Functor<void,TYPELIST_1(DWORD)> Callback;
typedef Win32ThreadPolicy PlatformPolicy;
private:
PlatformPolicy _platform;
/// not implemented by design
Thread();
Thread(const Thread&);
Thread& operator=(const Thread&);
public:
/// begins parallel execution of the parameter, func.
/// \param func, the function object to be executed.
Thread(Callback& func)
: _platform(func)
{
_platform.Spawn();
}
/// stops parallel execution and joins with main thread.
~Thread()
{
_platform.Join();
}
};
Thread.cpp
#include "Thread.h"
void CS::_lock()
{
::EnterCriticalSection( &_handle );
}
void CS::_unlock()
{
::LeaveCriticalSection( &_handle );
}
CS::CS()
: _handle()
{
::memset( &_handle, 0, sizeof(CRITICAL_SECTION) );
::InitializeCriticalSection( &_handle );
}
CS::~CS()
{
::DeleteCriticalSection( &_handle );
}
Win32ThreadPolicy::Win32ThreadPolicy(Callback& func)
: _handle(NULL)
//, _sec_descr()
//, _sec_attrib()
, _id(0)
, _is_joined(true)
, _callback(func)
{
}
void Win32ThreadPolicy::Spawn()
{
// for an example of managing descriptors, see:
// http://msdn.microsoft.com/en-us/library/windows/desktop/aa446595%28v=vs.85%29.aspx
//BOOL success_descr = ::InitializeSecurityDescriptor( &_sec_descr, SECURITY_DESCRIPTOR_REVISION );
//TODO: do we want to start with CREATE_SUSPENDED ?
// TODO: wrap this with exception handling
#ifdef USE_BEGIN_END
// http://msdn.microsoft.com/en-us/library/kdzttdcb%28v=vs.100%29.aspx
_handle = (HANDLE) _beginthreadex( NULL, 0, &Thread::ThreadProc, this, 0, &_id );
#else // USE_BEGIN_END
_handle = ::CreateThread( NULL, 0, &Win32ThreadPolicy::ThreadProc, this, 0, &_id );
#endif // USE_BEGIN_END
}
void Win32ThreadPolicy::_join()
{
// signal that the thread should complete
_is_joined = true;
// maybe ::WFSO is not the best solution.
// "Except that WaitForSingleObject and its big brother WaitForMultipleObjects are dangerous.
// The basic problem is that these calls can cause deadlocks,
// if you ever call them from a thread that has its own message loop and windows."
// http://marc.durdin.net/2012/08/waitforsingleobject-why-you-should-never-use-it/
//
// He advises to use MsgWaitForMultipleObjects instead:
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms684242%28v=vs.85%29.aspx
DWORD result = ::WaitForSingleObject( _handle, INFINITE );
// _handle must have THREAD_QUERY_INFORMATION security access enabled to use the following:
//DWORD exitCode = 0;
//BOOL success = ::GetExitCodeThread( _handle, &_exitValue );
}
Win32ThreadPolicy::~Win32ThreadPolicy()
{
}
void Win32ThreadPolicy::Join()
{
if( !_is_joined )
{
_join();
}
// this example shows that it is correct to pass the handle returned by CreateThread
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms682516%28v=vs.85%29.aspx
::CloseHandle( _handle );
_handle = NULL;
}
DWORD Win32ThreadPolicy::_run()
{
// TODO: do we need to make sure _id has been assigned?
while( !_is_joined )
{
_callback(_id);
::Sleep(0);
}
// TODO: what should we return?
return 0;
}
#ifdef USE_BEGIN_END
unsigned int WINAPI Thread::ThreadProc( LPVOID lpParameter )
#else // USE_BEGIN_END
DWORD WINAPI Win32ThreadPolicy::ThreadProc( LPVOID lpParameter )
#endif // USE_BEGIN_END
{
Win32ThreadPolicy* tptr = static_cast<Win32ThreadPolicy*>( lpParameter );
tptr->_is_joined = false;
// when this function (ThreadProc) returns, ::ExitThread is used to terminate the thread with an "implicit" call.
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms682453%28v=vs.85%29.aspx
return tptr->_run();
}
I know this is a bit in assumption space, but:
The rate you are talking about is set in the "server" and the "client" via a sleep that controls how fast packets are sent. This is not necessarily the rate of actual transmission, since the OS can schedule your processes very asymmetrically (time-wise).
This can mean that when the server gets more time, it fills an OS buffer with packets (the client gets less processor time and therefore consumes at a lower rate, slowing down the meter). Then, when the client gets more time than the server, it consumes all the buffered packets quickly, while the update thread still does some waiting. But this doesn't mean it will "snap", because you are using a critical section to lock the packet update, so you probably don't get to consume many packets from the OS buffer before the next update (you may get a "snap to", but with a small step). I am basing this on the fact that I see no actual sleeping in your receive or update methods (the only sleep is done on the server side).
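If the goal is to snap to the newest angle rather than replay a backlog, one possible sketch (not from the answer, and using a raw non-blocking Winsock socket instead of the question's _socket wrapper, so the names here are assumptions) is to drain every datagram queued in the OS buffer each frame and keep only the last one:
#include <winsock2.h>   // link with ws2_32.lib; the socket is assumed to be non-blocking

struct Packet { float angle; };   // placeholder matching the question's packet layout

// Returns true if at least one packet was read; 'latest' then holds the newest one.
bool read_latest_packet(SOCKET s, Packet& latest)
{
    bool got_one = false;
    for (;;)
    {
        Packet p;
        int n = recv(s, reinterpret_cast<char*>(&p), sizeof(p), 0);
        if (n == SOCKET_ERROR)
            break;                // WSAEWOULDBLOCK: the queue is drained
        latest = p;               // older packets are discarded, only the newest survives
        got_one = true;
    }
    return got_one;
}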
I am working on an application where the requirement is to execute a function every 100 ms.
Below is my code
void checkOCIDs()
{
// Do something that might take more than 100ms of time
}
void TimeOut_CallBack(int w)
{
struct itimerval tout_val;
int ret = 0;
signal(SIGALRM,TimeOut_CallBack);
/* Configure the timer to expire after 100000 ... */
tout_val.it_value.tv_sec = 0;
tout_val.it_value.tv_usec = 100000; /* 100000 timer */
/* ... and every 100 msec after that. */
tout_val.it_interval.tv_sec = 0 ;
tout_val.it_interval.tv_usec = 100000;
checkOCIDs();
setitimer(ITIMER_REAL, &tout_val,0);
return ;
}
TimeOut_CallBack( ) is called only once to arm the timer; from then on, checkOCIDs( ) must be executed every 100 ms.
Currently the application blocks, because checkOCIDs( ) takes more than 100 ms to complete and the timer fires again before it has finished.
I do not wish to use while(1) with sleep( ) / usleep( ), as that eats up my CPU enormously.
Please suggest an alternative to achieve my requirement.
It is not clear what should happen when the timer expires while the "check" function is still in progress. Maybe it would be acceptable to introduce a variable indicating that the timer expired, so that your function is executed again after it completes; pseudo-code:
static volatile bool check_in_progress = false;
static volatile bool timer_expired = false;
void TimeOut_CallBack(int w)
{
// ...
if (check_in_progress) {
timer_expired = true;
return;
}
// spawn/resume check function thread
// ...
}
void checkThreadProc()
{
check_in_progress = true;
do {
timer_expired = false;
checkOCIDs();
} while(timer_expired);
check_in_progress = false;
// end thread or wait for a signal to resume
}
Note that additional synchronization may be required to avoid race conditions (for instance, when one thread exits the do-while loop while check_in_progress is still set and the other thread sets timer_expired, the check function will not be executed), but that depends on the details of your requirements.
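A minimal sketch of one way to complete this on POSIX (an assumption, not the asker's code): keep the signal handler trivial and async-signal-safe by only posting a semaphore, and run checkOCIDs() in a worker thread that collapses any ticks that piled up while it was busy:
#include <errno.h>
#include <pthread.h>
#include <semaphore.h>
#include <signal.h>
#include <sys/time.h>

void checkOCIDs();                   // the existing function, may take > 100 ms

static sem_t g_tick;                 // counts expired 100 ms ticks

static void TimeOut_CallBack(int)    // SIGALRM handler: do nothing but post
{
    sem_post(&g_tick);               // async-signal-safe
}

static void* checkThreadProc(void*)
{
    for (;;)
    {
        while (sem_wait(&g_tick) == -1 && errno == EINTR)
            continue;                // retry if interrupted by SIGALRM itself
        while (sem_trywait(&g_tick) == 0)
            ;                        // collapse ticks that arrived while we were busy
        checkOCIDs();                // ticks that expire during this call simply queue up
    }
    return 0;
}

// In main(): sem_init(&g_tick, 0, 0), install TimeOut_CallBack with sigaction(),
// arm the periodic 100 ms timer with setitimer(), then pthread_create(checkThreadProc).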
I am having some issue with a process that is being launched by std::async.
class BaseClass {
public:
BaseClass() {enabledFlag = false;}
virtual ~BaseClass() {}
protected:
int process();
bool enabledFlag;
};
int BaseClass::process() {
int rc = -1;
if (enabledFlag == false) {
std::cout << "Not enabled\n" << std::flush;
return rc;
}
rc = 0;
while (enabledFlag) {
// this loop should set rc to be something other than zero if an error is to be signalled
// otherwise loop here doing stuff until the user sets enabledFlag=false
}
return rc;
}
class DerivedClassWithExposedMembersForTesting : public BaseClass {
public:
using BaseClass::enabledFlag;
using BaseClass::process;
};
In my Google Test test:
TEST(FixtureName, process_exitsWithRC0_WhenEnabledFlagSetTrueDuringExecution {
DerivedClassWithExposedMembersForTesting testClass;
testClass.enabledFlag = true;
// print status
std::cout << "Enabled: " << testClass.enabledFlag << std::endl << std::flush;
std::future<int> returnCodeFuture = std::async(std::launch::async, &DerivedClassWithExposedMembersForTesting::process, &testClass); // starts background execution
// set flag to false to kill loop
testClass.enabledFlag = false;
int rc = returnCodeFuture.get();
EXPECT_EQ(0, rc);
}
My understanding of std::async is that it should be scheduled to run shortly after the call to async, and the main thread of execution will block at the get() call if the thread hasn't finished. The call to get() will return the return value of process().
process() is set to not run if the testClass is not enabled, hence I am enabling it in the test.
I expect to see:
Enabled: 1
// test passes
What I see is:
Enabled: 1
Not enabled
// test fails
Failure
Value of: rc
Actual: -1
Expected: 0
Why is the process triggered by std::async not seeing the value of enabledFlag that is set by the main process prior to the async call being made?
Note: enabledFlag is supposed to be set from an external process, not generally from within the loop, hence this construction
** Update **
As per my comment, I fixed it by adding the following line to the test, just after the call to async():
// Use wait_for() with zero milliseconds to check thread status; delay until it has started
while (returnCodeFuture.wait_for(std::chrono::milliseconds(0)) == std::future_status::deferred) {}
The problem is that you don't know when the thread will run. It could be that you simply set the flag to false before the thread actually runs.
One simple way of solving this is to have another state variable, isRunning, that the thread sets inside the loop. Your main thread can check for this to know when the thread is running, and then tell it to end.
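A minimal sketch of that suggestion (with the flags made std::atomic<bool>, which is an addition here, since plain bools written by one thread and read by another are a data race in C++):
#include <atomic>
#include <chrono>
#include <future>
#include <thread>

struct Worker                                  // stand-in for the test class
{
    std::atomic<bool> enabledFlag{false};
    std::atomic<bool> isRunning{false};

    int process()
    {
        if (!enabledFlag)
            return -1;                         // the "Not enabled" path
        isRunning = true;                      // tell the caller the loop has started
        while (enabledFlag)
            std::this_thread::yield();         // real work would go here
        return 0;
    }
};

int main()
{
    Worker w;
    w.enabledFlag = true;
    auto rc = std::async(std::launch::async, &Worker::process, &w);
    while (!w.isRunning)                       // wait until the loop is actually running
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    w.enabledFlag = false;                     // now it is safe to ask it to stop
    return rc.get();                           // returns 0
}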
I'm trying to start a thread from a ctor. The thread should sleep, wake up, perform a buffer dump, then sleep again, and so on. This is the code for the ctor:
Logger::Logger()
{
BufferInUse = &CyclicBuffer1; //buffer 1 will be used at beginning
MaxBufferSize = 5; //initial state
NumOfCycles = 0;
CurrentMaxStringLength = 0;
position = BufferInUse->end();
OutPutMethod = odBuffer; //by default
Thresh = 1; //by default
hTimer = CreateWaitableTimer(NULL, TRUE, NULL);
EventTime.QuadPart = -20000000; //relative due time in 100-ns units (-20000000 is 2 seconds)
Mutex = CreateMutex(NULL,FALSE,NULL);
if (Mutex == NULL)
{
OutputDebugStringA("CreateMutex error! the Logger will close \n");
return ;
}
_beginthread( Logger::WorkerThread , 0,(void*)this ); //run the thread
}
When I debug it, it takes a lot of time for the thread to even be created and for the ctor to finish, but during that time my object's member functions already get called many times (I can see this while debugging).
1. I want the thread to be created before my member functions get called. What is the best way to achieve that?
My current thread implementation is:
void Logger::WorkerThread ( void *lpParam )
{
Logger *log = static_cast <Logger*> (lpParam);
if (NULL == log->hTimer)
{
log->LogStringToOutput("CreateWaitableTimer() failed , Logger will close \n");
return;
}
for(;;)
{
//set timer for time specified by the EventTime variable inside the Logger
if (!SetWaitableTimer(log->hTimer, & (log->EventTime), 0, NULL, NULL, 0))
{
log->LogStringToOutput("SetWaitableTimer() failed , Logger will close\n" );
_endthread();
}
//wait for timer
if (WaitForSingleObject(log->hTimer, INFINITE) != WAIT_OBJECT_0)
{
log->LogStringToOutput("WaitForSingleObject() failed! Logger will close\n");
_endthread();
return;
}
if(log->getOutputMethod() == odBuffer && log->BufferInUse->size() >= log->Thresh && !log->BufferInUse->empty())
{
TTFW_LogRet ret;
ret = log->FullBufferDump();
if (ret != SUCCESS)
{
log->LogStringToOutput("Error occured in dumping cyclic buffer , the buffer will be cleared\n");
}
}
}
}
2. Is there a more elegant implementation of this thread's functionality?
You need some mechanism to synchronize WorkerThread startup with member-function access.
For example, use a condition variable (documented on MSDN).
Add three members to Logger:
class Logger{
...
private:
CRITICAL_SECTION CritSection;
CONDITION_VARIABLE ConditionVar;
bool WorkerThreadStarted;
...
};
Initialize them in the constructor, and have the worker thread set the flag and wake any waiters as soon as it starts:
Logger::Logger() : WorkerThreadStarted(false)
{
    InitializeCriticalSection(&CritSection);      //added
    InitializeConditionVariable(&ConditionVar);   //added
    BufferInUse = &CyclicBuffer1; //buffer 1 will be used at beginning
    ...
}
void Logger::WorkerThread ( void *lpParam )
{
    Logger *log = static_cast <Logger*> (lpParam);
    EnterCriticalSection(&log->CritSection);      //added
    log->WorkerThreadStarted = true;              //added
    WakeAllConditionVariable(&log->ConditionVar); //added
    LeaveCriticalSection(&log->CritSection);      //added
    if (NULL == log->hTimer)
    {
        log->LogStringToOutput("CreateWaitableTimer() failed , Logger will close \n");
        return;
    }
    ...
}
Then add a function like this:
void Logger::EnsureInitiallized(){
EnterCriticalSection(&CritSection);
// Wait until the predicate is TRUE
while( !WorkerThreadStarted )
{
SleepConditionVariableCS(&ConditionVar, &CritSection, INFINITE);
}
LeaveCriticalSection(&CritSection);
}
and at every member function's entry, call EnsureInitiallized();
void Logger::yourFunction(){
EnsureInitiallized();
...
}
That is just one example; you could also use a reader/writer lock, an atomic integer, etc.
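For instance, a sketch of the atomic-integer variant (an illustration only, with hypothetical names): the worker publishes a "started" flag with an interlocked write, and the callers simply yield until they see it, so no critical section or condition variable is needed if a brief startup spin is acceptable.
#include <windows.h>
#include <process.h>

class Logger2                                    // hypothetical, mirrors the Logger above
{
    LONG volatile _started;
public:
    Logger2() : _started(0)
    {
        _beginthread(Logger2::WorkerThread, 0, this);
    }
    static void WorkerThread(void* lpParam)
    {
        Logger2* self = static_cast<Logger2*>(lpParam);
        InterlockedExchange(&self->_started, 1); // publish "I am running"
        // ... waitable-timer loop as in the question ...
    }
    void EnsureInitialized()
    {
        while (InterlockedCompareExchange(&_started, 1, 1) == 0)
            SwitchToThread();                    // yield until the worker has started
    }
};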