Thread Terminating Early with Code 255 - c++

I'm attempting to run a part of my program in a thread and getting an unusual result.
I have updated this question with the results of the changes suggested by Remus, but as I am still getting an error, I feel the question is still open.
I have implemented functionality in a dll to tie into a piece of vendor software. Everything works until I attempt to create a thread inside this dll.
Here is the relevant section of the DLL:
extern "C" {
__declspec(dllexport) void __cdecl ccEntryOnEvent(WORD event);
}
to define the function the vendor's software calls, then:
using namespace std;

HANDLE LEETT_Thread = NULL;
static bool run_LEETT = true;
unsigned threadID;
void *lpParam;

int RunLEETTThread ( void ) {
    LEETT_Thread = (HANDLE)_beginthreadex( NULL, 0, LEETT_Main, lpParam, 0, &threadID );
    //LEETT_Thread = CreateThread ( NULL, 0, LEETT_Main, lpParam, 0, NULL );
    if ( LEETT_Thread == NULL )
        ErrorExit ( _T("Unable to start translator thread") );
    run_LEETT = false; // We only wish to create the thread a single time.
    return 0;
}
extern "C" void __cdecl ccEntryOnEvent(WORD event ) {
switch (event) {
case E_START:
if ( run_LEETT ) {
RunLEETTThread ();
MessageText ( "Running LEETT Thread" );
}
break;
}
WaitForSingleObject( LEETT_Thread ,INFINITE);
return;
}
The function is declared as
unsigned __stdcall LEETT_Main ( void* lpParam ) {
LEETT_Main is about 136 KB when compiled as a stand-alone executable with no optimization (I have a separate file with a main() in it that calls the same function as myFunc).
Prior to changing the way the thread is called, the program would crash when declaring a structure containing a std::list, shown here:
struct stateFlags {
    bool inComment;        // multiline comments bypass parsing, but not line numbering
    // Line preconditions
    bool MCodeSeen;        // only 1 M code per block allowed
    bool GCodeSeen;        // only 1 G code per block allowed
    std::list<int> gotos;  // a list of the destination line numbers
};
It now crashes on the _beginthreadex command, tracing through shows this
/*
* Allocate and initialize a per-thread data structure for the to-
* be-created thread.
*/
if ( (ptd = (_ptiddata)_calloc_crt(1, sizeof(struct _tiddata))) == NULL )
goto error_return;
Tracing through this I saw an error 252 (bad ptr) and ultimately 255 (runtime error).
I'm wondering if anyone has encountered this sort of behaviour creating threads (in DLLs?) and what the remedy might be. When I created an instance of this structure in my toy program, there was no issue. When I removed the list variable, the program simply crashed elsewhere, on the declaration of a string.
I'm very open to suggestions at this point; if I have to, I'll drop the idea of threading for now, though that's not particularly practical.
Thanks, especially to those reading this over again :)

Threads that use CRT (and std::list implies CRT) need to be created with _beginthreadex, as documented on MSDN:
A thread in an executable that calls the C run-time library (CRT)
should use the _beginthreadex and _endthreadex functions for thread
management rather than CreateThread and ExitThread;
It's not clear how you start your thread, but it appears that you're doing it in DllMain, which is not recommended (see Does creating a thread from DllMain deadlock or doesn't it?).
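For reference, here is a minimal sketch of a CRT-safe thread created with _beginthreadex (the thread routine and its argument are placeholders, not from the question):

#include <windows.h>
#include <process.h>

// Thread routines passed to _beginthreadex must have this signature.
unsigned __stdcall WorkerMain( void *arg )
{
    // ... do the work ...
    return 0;   // becomes the thread's exit code
}

HANDLE StartWorker()
{
    unsigned threadId = 0;
    // Unlike CreateThread, _beginthreadex also sets up the CRT's per-thread data.
    HANDLE h = (HANDLE)_beginthreadex( NULL, 0, WorkerMain, NULL, 0, &threadId );
    // The caller should eventually WaitForSingleObject(h, ...) and CloseHandle(h).
    return h;
}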

On rechecking the comments here and the project configuration, the vendor-supplied solution file uses /MTd for debug builds, but since we are building a DLL I needed /MDd instead; with that change it immediately compiles and runs correctly.
Sorry about the ridiculous head scratcher...

Related

CreateThread inside another thread

I am having an issue creating a thread inside another thread. Normally I would be able to do this, but the problem arises because I've incremented the reference count of the DLL which starts these threads. I need to start multiple threads inside this DLL. How can I get around this and be able to issue multiple CreateThread() calls when needed in my project without running into problems because of the incremented reference count on my DLL?
Here is the function I've written to Increment Reference Count in my DLL file:
BOOL IncrementReference( HMODULE hModule )
{
    if ( hModule == NULL )
        return FALSE;

    TCHAR ModulePath[ MAX_PATH + 1 ];
    if ( GetModuleFileName( hModule, ModulePath, MAX_PATH ) == 0 )
        return FALSE;

    if ( LoadLibrary( ModulePath ) == NULL )
        return FALSE;

    return TRUE;
}
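Incidentally (this alternative is not in the original code), the same reference-count bump can be obtained without the filename round-trip by using GetModuleHandleEx with GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS; a rough sketch:

#include <Windows.h>

// Resolves the module that contains the given address and, because
// GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT is not passed, bumps its
// load count. Best called outside DllMain, like any loader-related API.
BOOL PinSelfByAddress()
{
    HMODULE hSelf = NULL;
    return GetModuleHandleEx( GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS,
                              reinterpret_cast<LPCTSTR>( &PinSelfByAddress ),
                              &hSelf );
}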
As requested, here is a PoC program to recreate the issue I am facing. I am really hoping this will help you guys point me to a solution. Also, take note, the DLL is being unloaded due to conditions in the application which I am targeting (hooks that are already set in that application), so incrementing the reference count is required for my thread to run in the first place.
Also, I can't run more than one operation in the main thread as it has its own functionality to take care of and another thread is required on the side to take care of something else. They must also run simultaneously, hence I need to fix this issue of making more than one thread in an Incremented DLL.
// dllmain.cpp : Defines the entry point for the DLL application.
#pragma comment( linker , "/Entry:DllMain" )
#include <Windows.h>
#include <process.h>

UINT CALLBACK SecondThread( PVOID pParam )
{
    MessageBox( NULL, __FUNCTION__, "Which Thread?", 0 );
    return 0;
}

UINT CALLBACK FirstThread( PVOID pParam )
{
    MessageBox( NULL, __FUNCTION__, "Which Thread?", 0 );
    _beginthreadex( 0, 0, &SecondThread, 0, 0, 0 );
    return 0;
}

BOOL IncrementReference( HMODULE hModule )
{
    if ( hModule == NULL )
        return FALSE;

    TCHAR ModulePath[ MAX_PATH + 1 ];
    if ( GetModuleFileName( hModule, ModulePath, MAX_PATH ) == 0 )
        return FALSE;

    if ( LoadLibrary( ModulePath ) == NULL )
        return FALSE;

    return TRUE;
}

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD   ul_reason_for_call,
                       LPVOID  lpReserved )
{
    switch ( ul_reason_for_call )
    {
    case DLL_PROCESS_ATTACH:
        {
            if ( IncrementReference( 0 ) )
                _beginthreadex( 0, 0, &FirstThread, 0, 0, 0 );
        }
        break;
    }
    return TRUE;
}
As you can see, the code never executes the SecondThread function. The question is, why? And what can be done to fix it?
#pragma comment( linker , "/Entry:DllMain" )
That was a very bad idea; the proper entry point for a DLL is not in fact DllMain(). You have to keep in mind that WinMain and DllMain are just placeholder names, a way for Microsoft to document the role of executable entry points. When you use those same names in your program by convention, everybody understands what they do.
But there's a very important additional detail: in a C or C++ program the CRT (C runtime library) needs to be initialized before you can run any code that might make CRT function calls, like _beginthreadex().
In other words, the default /ENTRY linker option is not DllMain(). The real entry point of a DLL is _DllMainCRTStartup(), a function inside the CRT that takes care of the required initialization and then calls DllMain(). If you wrote a DllMain() in your program, that's the one that gets called; if you didn't, a dummy one in the CRT gets linked.
All bets are off when you make CRT function calls and the CRT wasn't initialized. You must remove that #pragma so the linker will use the correct entrypoint.
According to MSDN you should call neither LoadLibrary nor CreateThread inside DllMain - your code does both!
The MCVE as posted has three problems:
The first is a simple mistake, you're calling IncrementReference(0) instead of IncrementReference(hModule).
The second is that there is no entry point for rundll32 to use; the entry point argument is mandatory, or rundll32 won't work (I don't think it even loads the DLL).
The third is the #pragma as pointed out by Hans.
After fixing the IncrementReference() call, removing the #pragma and adding an entry point:
extern "C" __declspec(dllexport) void __stdcall EntryPoint(HWND, HINSTANCE, LPSTR, INT)
{
MessageBoxA( NULL , __FUNCTION__ , "Which Thread?" , 0 );
}
You can then run the DLL like this:
rundll32 testdll.dll,_EntryPoint@16
This works on my machine; EntryPoint, FirstThread and SecondThread all generate message boxes. Make sure you do not dismiss the message box from EntryPoint prematurely, as that will cause the application to exit, taking the other threads with it.
The call to LoadLibrary is still improper, however it does not appear to have any side-effects in this scenario (probably because the library in question is guaranteed to already be loaded).
(Previous) Answer:
The MCVE can be fixed by simply moving the call to IncrementReference from DllMain to FirstThread. That is the only safe and correct way to resolve the problem.
Addendum: as Hans pointed out, you'll also need to remove the /Entry pragma.
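A minimal sketch of that change (hypothetical code, and still subject to the unload race discussed below) would pass the module handle through the thread parameter and take the extra reference from the new thread instead of from DllMain:

UINT CALLBACK FirstThread( PVOID pParam )
{
    // Take the extra reference here, outside of DllMain.
    HMODULE hThisDll = static_cast<HMODULE>( pParam );
    if ( !IncrementReference( hThisDll ) )
        return 1;

    MessageBox( NULL, __FUNCTION__, "Which Thread?", 0 );
    _beginthreadex( 0, 0, &SecondThread, 0, 0, 0 );
    return 0;
}

BOOL APIENTRY DllMain( HMODULE hModule, DWORD ul_reason_for_call, LPVOID )
{
    if ( ul_reason_for_call == DLL_PROCESS_ATTACH )
        _beginthreadex( 0, 0, &FirstThread, hModule, 0, 0 );
    return TRUE;
}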
(Redundant?) Commentary:
If the application that is loading the DLL is misbehaving to the extent where the DLL is being unloaded before FirstThread can run, and assuming for the sake of argument that you can't fix it, the only realistic option is to work around the problem - for example, DllMain could suspend all the other threads in the process so that they cannot unload the DLL, and resume them from FirstThread after the call to IncrementReference.
Or you could try hooking FreeLibrary, or reverse engineering the loader and messing with the reference count directly, or removing the hooks the application has placed, or loading a separate copy of the DLL by hand inside DllMain (with your own DLL loader rather than the one Windows provides) or starting a separate process and working from there or, oh, no doubt there's any number of other possibilities, but at that point I'm afraid the question really is too broad for Stack Overflow, particularly since you can't give us the real details of what the application is doing.

VC++ 2010: Weird Critical Section error

My program is randomly crashing in a small scenario I can reproduce, but it happens in mlock.c (which is a VC++ runtime file) from ntdll.dll, and I can't see the stack trace. I do know that it happens in one of my thread functions, though.
This is the mlock.c code where the program crashes:
void __cdecl _unlock (
    int locknum
    )
{
    /*
     * leave the critical section.
     */
    LeaveCriticalSection( _locktable[locknum].lock );
}
The error is "invalid handle specified". If I look at locknum, it's a number larger than _locktable's size, so this makes some sense.
This seems to be related to Critical Section usage. I do use CRITICAL_SECTIONS in my thread, via a CCriticalSection wrapper class and its associated RAII guard, CGuard. Definitions for both here to avoid even more clutter.
This is the thread function that's crashing:
unsigned int __stdcall CPlayBack::timerThread( void * pParams ) {
#ifdef _DEBUG
    DRA::CommonCpp::SetThreadName( -1, "CPlayBack::timerThread" );
#endif
    CPlayBack * pThis = static_cast<CPlayBack*>( pParams );
    bool bContinue = true;
    while( bContinue ) {
        float m_fActualFrameRate = pThis->m_fFrameRate * pThis->m_fFrameRateMultiplier;
        if( m_fActualFrameRate != 0 && pThis->m_bIsPlaying ) {
            bContinue = ( ::WaitForSingleObject( pThis->m_hEndThreadEvent, static_cast<DWORD>( 1000.0f / m_fActualFrameRate ) ) == WAIT_TIMEOUT );
            CImage img;
            if( pThis->m_bIsPlaying && pThis->nextFrame( img ) )
                pThis->sendImage( img );
        }
        else
            bContinue = ( ::WaitForSingleObject( pThis->m_hEndThreadEvent, 10 ) == WAIT_TIMEOUT );
    }
    ::GetErrorLoggerInstance()->Log( LOG_TYPE_NOTE, "CPlayBack", "timerThread", "Exiting thread" );
    return 0;
}
Where does CCriticalSection come in? Every CImage object contains a CCriticalSection object which it uses through a CGuard RAII lock. Moreover, every CImage contains a CSharedMemory object which implements reference counting. To that end, it contains two CCriticalSection's as well, one for the data and one for the reference counter. A good example of these interactions is best seen in the destructors:
CImage::~CImage() {
    CGuard guard(m_csData);
    if( m_pSharedMemory != NULL ) {
        m_pSharedMemory->decrementUse();
        if( !m_pSharedMemory->isBeingUsed() ){
            delete m_pSharedMemory;
            m_pSharedMemory = NULL;
        }
    }
    m_cProperties.ClearMin();
    m_cProperties.ClearMax();
    m_cProperties.ClearMode();
}

CSharedMemory::~CSharedMemory() {
    CGuard guardUse( m_cs );
    if( m_pData && m_bCanDelete ){
        delete []m_pData;
    }
    m_use = 0;
    m_pData = NULL;
}
Anyone bumped into this kind of error? Any suggestion?
Edit: I got to see part of the call stack: the call comes from ~CSharedMemory, so there must be some race condition there.
Edit: More CSharedMemory code here
The "invalid handle specified" return code paints a pretty clear picture that your critical section object has been deallocated; assuming of course that it was allocated properly to begin with.
Your RAII class seems like a likely culprit. If you take a step back and think about it, your RAII class violates the Separation of Concerns principle, because it has two jobs:
It provides allocate/destroy semantics for the CRITICAL_SECTION
It provides acquire/release semantics for the CRITICAL_SECTION
Most implementations of a CS wrapper I have seen violate the SoC principle in the same way, but it can be problematic, especially when you have to start passing around instances of the class in order to get to the acquire/release functionality. Consider a simple, contrived example in pseudocode:
void WorkerThreadProc(CCriticalSection cs)
{
    cs.Enter();
    // MAGIC HAPPENS
    cs.Leave();
}

int main()
{
    CCriticalSection my_cs;
    std::vector<NeatStuff> stuff_used_by_multiple_threads;

    // Create 3 threads, passing the entry point "WorkerThreadProc"
    for( int i = 0; i < 3; ++i )
        CreateThread(... &WorkerThreadProc, my_cs);

    // Join the 3 threads...
    wait();
}
The problem here is CCriticalSection is passed by value, so the destructor is called 4 times. Each time the destructor is called, the CRITICAL_SECTION is deallocated. The first time works fine, but now it's gone.
You could kludge around this problem by passing references or pointers to the critical section class, but then you muddy the semantic waters with ownership issues. What if the thread that "owns" the crit sec dies before the other threads? You could use a shared_ptr, but now nobody really "owns" the critical section, and you have given up a little control in one area in order to gain a little in another.
The true "fix" for this problem is to separate concerns. Have one class for allocation & deallocation:
class CCriticalSection : public CRITICAL_SECTION
{
public:
    CCriticalSection()  { InitializeCriticalSection(this); }
    ~CCriticalSection() { DeleteCriticalSection(this); }
};
...and another to handle locking & unlocking...
class CSLock
{
public:
    CSLock(CRITICAL_SECTION& cs) : cs_(cs) { EnterCriticalSection(&cs_); }
    ~CSLock() { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION& cs_;
};
Now you can pass around raw pointers or references to a single CCriticalSection object, possibly const, and have the worker threads instantiate their own CSLocks on it. The CSLock is owned by the thread that created it, which is as it should be, but ownership of the CCriticalSection is clearly retained by some controlling thread; also a good thing.
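A short usage sketch of the two classes together (the shared data and worker routine here are made up for illustration):

#include <windows.h>
#include <process.h>
#include <vector>

CCriticalSection g_cs;       // allocated/destroyed by the controlling code
std::vector<int> g_shared;   // data protected by g_cs

unsigned __stdcall Worker( void* )
{
    {
        CSLock lock( g_cs );        // EnterCriticalSection on construction
        g_shared.push_back( 42 );
    }                               // LeaveCriticalSection when lock goes out of scope
    return 0;
}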
Make sure the critical section object is not inside a #pragma pack(1) region (or any other non-default packing).
Ensure that no other thread (or the same thread) is corrupting the CS object. Run some static analysis tool to check for buffer overrun problems.
If you have runtime analysis tool, do run it to find the issue.
I decided to adhere to the KISS principle and simplify things. I figured I'd replace the CSharedMemory class with a std::tr1::shared_ptr<BYTE> and a CCriticalSection which protects it from concurrent access. Both are members of CImage now, and concerns are better separated, IMHO.
That solved the weird critical section error, but now it seems I have a memory leak caused by std::tr1::shared_ptr; you might see me post about it soon... It never ends!

What is the most efficient way to make this code thread safe?

Some C++ library I'm working on features a simple tracing mechanism which can be activated to generate log files showing which functions were called and what arguments were passed. It basically boils down to a TRACE macro being spilled all over the source of the library, and the macro expands to something like this:
typedef void(*TraceProc)( const char *msg );

/* Sets 'callback' to point to the trace procedure which actually prints the given
 * message to some output channel, or to a null trace procedure which is a no-op in
 * case the given source file/line position was disabled by the client.
 *
 * This function also registers the callback pointer in an internal data structure
 * and resets it to zero in case the filtering configuration changed since the last
 * invocation of updateTraceCallback.
 */
void updateTraceCallback( TraceProc *callback, const char *file, unsigned int lineno );

#define TRACE(msg) \
{ \
    static TraceProc traceCallback = 0; \
    if ( !traceCallback ) \
        updateTraceCallback( &traceCallback, __FILE__, __LINE__ ); \
    traceCallback( msg ); \
}
The idea is that people can just say TRACE("foo hit") in their code and that will either call a debug printing function or be a no-op. They can use some other API (which is not shown here) to configure which TRACE locations (source file/line number) should actually be printed. This configuration can change at runtime.
The issue is that this idea should now be used in a multi-threaded code base. Hence, the code which TRACE expands to needs to work correctly when multiple threads of execution run it simultaneously. There are about 20,000 different trace points in the code base right now and they are hit very often, so they should be rather efficient.
What is the most efficient way to make this approach thread safe? I need a solution for Windows (XP and newer) and Linux. I'm afraid of doing excessive locking just to check whether the filter configuration changed (99% of the time a trace point is hit, the configuration didn't change). I'm open to larger changes to the macro, too. So instead of discussing mutex vs. critical section performance, it would also be acceptable if the macro just sent an event to an event loop in a different thread (assuming that accessing the event loop is thread safe) and all the processing happens in the same thread, so it's synchronized using the event loop.
UPDATE: I can probably simplify this question to:
If I have one thread reading a pointer, and another thread which might write to the variable (but 99% of the time it doesn't), how can I avoid that the reading thread needs to lock all the time?
You could implement a configuration file version variable. When your program starts it is set to 0. The macro can hold a static int that is the last config version it saw. Then a simple atomic comparison between the last seen and the current config version will tell you if you need to take a full lock and re-call updateTraceCallback().
That way, 99% of the time you'll only add an extra atomic op, memory barrier or something similar, which is very cheap. 1% of the time, just do the full mutex thing; it shouldn't affect your performance in any noticeable way if it's only 1% of the time.
Edit:
Some .h file:
extern long trace_version;
Some .cpp file:
long trace_version = 0;
The macro:
#define TRACE(msg) \
{ \
    static long __lastSeenVersion = -1; \
    static TraceProc traceCallback = 0; \
    if ( !traceCallback || __lastSeenVersion != trace_version ) \
        updateTraceCallback( &traceCallback, &__lastSeenVersion, __FILE__, __LINE__ ); \
    traceCallback( msg ); \
}
The functions for incrementing a version and updates:
static long oldVersionRefcount = 0;
static long curVersionRefCount = 0;

void updateTraceCallback( TraceProc *callback, long *version, const char *file, unsigned int lineno ) {
    if ( *version != trace_version ) {
        if ( InterlockedDecrement( &oldVersionRefcount ) == 0 ) {
            //....free resources.....
            //...no mutex needed, since no one is using this...
        }
        //....acquire mutex and do stuff....
        InterlockedIncrement( &curVersionRefCount );
        *version = trace_version;
        //...release mutex...
    }
}
void setNewTraceCallback( TraceProc *callback ) {
    //...acquire mutex...
    trace_version++; // No locks, mutexes or anything, this is atomic by itself.
    while ( oldVersionRefcount != 0 ) { /* ..sleep? */ }
    InterlockedExchange( &oldVersionRefcount, curVersionRefCount );
    curVersionRefCount = 0;
    //.... and so on...
    //...release mutex...
}
Of course, this is very simplified, since if you need to upgrade the version and the oldVersionRefCount > 0, then you're in trouble; how to solve this is up to you, since it really depends on your problem. My guess is that in those situations, you could simply wait until the ref count is zero, since the amount of time that the ref count is incremented should be the time it takes to run the macro.
I still don't fully understand the question, so please correct me on anything I didn't get.
(I'm leaving out the backslashes.)
#define TRACE(msg)
{
    static TraceProc traceCallback = NULL;
    TraceProc localTraceCallback;

    localTraceCallback = traceCallback;

    if (!localTraceCallback)
    {
        updateTraceCallback(&localTraceCallback, __FILE__, __LINE__);

        // If two threads are running this at the same time
        // one of them will update traceCallback and get it overwritten
        // by the other. This isn't a big deal.
        traceCallback = localTraceCallback;
    }

    // Now there's no way localTraceCallback can be null.
    // An issue here is if in the middle of this executing
    // traceCallback gets null'ed. But you haven't specified any
    // restrictions about this either, so I'm assuming it isn't a problem.
    localTraceCallback(msg);
}
Your comment says "resets it to zero in case the filtering configuration changes at runtime" but am I correct in reading that as "resets it to zero when the filtering configuration changes"?
Without knowing exactly how updateTraceCallback implements its data structure, or what other data it's referring to in order to decide when to reset the callbacks (or indeed to set them in the first place), it's impossible to judge what would be safe. A similar problem applies to knowing what traceCallback does - if it accesses a shared output destination, for example.
Given these limitations the only safe recommendation that doesn't require reworking other code is to stick a mutex around the whole lot (or preferably a critical section on Windows).
I'm afraid of doing excessive locking just to check whether the filter configuration changed (99% of the time a trace point is hit, the configuration didn't change). I'm open to larger changes to the macro, too. So instead of discussing mutex vs. critical section performance, it would also be acceptable if the macro just sent an event to an event loop in a different thread (assuming that accessing the event loop is thread safe)
How do you think thread safe messaging between threads is implemented without locks?
Anyway, here's a design that might work:
The data structure that holds the filter must be changed so that it is allocated dynamically from the heap because we are going to be creating multiple instances of filters. Also, it's going to need a reference count added to it. You need a typedef something like:
typedef struct Filter
{
    unsigned int refCount;
    // all the other filter data
} Filter;
There's a singleton 'current filter' declared somewhere.
static Filter* currentFilter;
and initialised with some default settings.
In your TRACE macro:
#define TRACE(msg) \
{ \
    static Filter* filter = NULL; \
    static TraceProc traceCallback = NULL; \
    if (filterOutOfDate(filter)) \
    { \
        getNewCallback(__FILE__, __LINE__, &traceCallback, &filter); \
    } \
    traceCallback(msg); \
}
filterOutOfDate() merely compares the filter with currentFilter to see if it is the same. It should be enough to just compare addresses. It does no locking.
getNewCallback() applies the current filter to get the new trace function and updates the filter passed in with the address of the current filter. Its implementation must be protected with a mutex lock. Also, it decrements the refCount of the original filter and increments the refCount of the new filter. This is so we know when we can free the old filter.
void getNewCallback(const char* file, int line, TraceProc* newCallback, Filter** filter)
{
    // MUTEX lock
    *newCallback = // whatever you need to do
    currentFilter->refCount++;
    if (*filter != NULL)
    {
        (*filter)->refCount--;
        if ((*filter)->refCount == 0)
        {
            // free filter and associated resources
        }
    }
    *filter = currentFilter;
    // MUTEX unlock
}
When you want to change the filter, you do something like
void changeFilter()
{
    Filter* newFilter = // build a new filter
    newFilter->refCount = 0;
    // MUTEX lock (same mutex as above)
    currentFilter = newFilter;
    // MUTEX unlock
}
If I have one thread reading a pointer, and another thread which might write to the variable (but 99% of the time it doesn't), how can I avoid that the reading thread needs to lock all the time?
From your code, it is OK to use the mutex inside updateTraceCallback() since it is going to be called very rarely (once per location). After taking the mutex, check whether traceCallback is already initialized: if yes, then another thread just did it for you and there is nothing to be done.
If updateTraceCallback() turned out to be a serious performance problem due to collisions on the global mutex, then you could simply make an array of mutexes instead and use a hashed value of the traceCallback address as an index into the mutex array. That would spread locking over many mutexes and minimize the number of collisions.
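A rough sketch of that mutex-array idea (the pool size and helper names are illustrative, and shown with Win32 critical sections; the Linux build would use pthread_mutex_t the same way):

#include <windows.h>

// A small fixed pool of locks; each trace location hashes to one of them,
// so unrelated trace points rarely contend on the same lock.
static const size_t kLockCount = 64;
static CRITICAL_SECTION g_traceLocks[kLockCount];

void initTraceLocks()
{
    for ( size_t i = 0; i < kLockCount; ++i )
        InitializeCriticalSection( &g_traceLocks[i] );
}

CRITICAL_SECTION* lockFor( const void* callbackSlot )
{
    // Hash the address of the per-location callback pointer into the pool.
    size_t h = reinterpret_cast<size_t>( callbackSlot ) / sizeof( void* );
    return &g_traceLocks[h % kLockCount];
}

// updateTraceCallback() would then do:
//   EnterCriticalSection( lockFor( callback ) );
//   ... re-check and update *callback ...
//   LeaveCriticalSection( lockFor( callback ) );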
#define TRACE(msg) \
{ \
    static TraceProc traceCallback = \
        updateTraceCallback( &traceCallback, __FILE__, __LINE__ ); \
    traceCallback( msg ); \
}

C++ libpthread program segfaults for unknown reason

I have a libpthread linked application. The core of the application are two FIFOs shared by four threads ( two threads per one FIFO that is ;). The FIFO class is synchronized using pthread mutexes and it stores pointers to big classes ( containing buffers of about 4kb size ) allocated inside static memory using overloaded new and delete operators ( no dynamic allocation here ).
The program itself usually works fine, but from time to time it segfaults for no visible reason. The problem is, that I can't debug the segfaults properly as I'm working on an embedded system with an old linux kernel (2.4.29) and g++ (gcc version egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)).
There's no gdb on the system, and I can't run the application elsewhere ( it's too hardware specific ).
I compiled the application with -g and -rdynamic flags, but an external gdb tells me nothing when I examine the core file ( only hex addresses ) - still I can print the backtrace from the program after catching SIGSEGV - it always looks like this:
Backtrace for process with pid: 6279
-========================================-
[0x8065707]
[0x806557a]
/lib/libc.so.6(sigaction+0x268) [0x400bfc68]
[0x8067bb9]
[0x8067b72]
[0x8067b25]
[0x8068429]
[0x8056cd4]
/lib/libpthread.so.0(pthread_detach+0x515) [0x40093b85]
/lib/libc.so.6(__clone+0x3a) [0x4015316a]
-========================================-
End of backtrace
So it seems to be pointing to libpthread...
I ran some of the modules through valgrind, but I didn't find any memory leaks (as I'm barely using any dynamic allocation ).
I thought that maybe the mutexes are causing some trouble ( as they are being locked/unlocked about 200 times a second ) so I switched my simple mutex class:
class AGMutex {
public:
    AGMutex( void ) {
        pthread_mutex_init( &mutex1, NULL );
    }
    ~AGMutex( void ) {
        pthread_mutex_destroy( &mutex1 );
    }
    void lock( void ) {
        pthread_mutex_lock( &mutex1 );
    }
    void unlock( void ) {
        pthread_mutex_unlock( &mutex1 );
    }
private:
    pthread_mutex_t mutex1;
};
to a dummy mutex class:
class AGMutex {
public:
    AGMutex( void ) : mutex1( false ) {
    }
    ~AGMutex( void ) {
    }
    volatile void lock( void ) {
        if ( mutex1 ) {
            while ( mutex1 ) {
                usleep( 1 );
            }
        }
        mutex1 = true;
    }
    volatile void unlock( void ) {
        mutex1 = false;
    }
private:
    volatile bool mutex1;
};
but it changed nothing and the backtrace looks the same...
After some old-school put-cout-between-every-line-and-see-where-it-segfaults-plus-remember-the-pids-and-stuff debugging session, it seems that it segfaults during usleep (?).
I have no idea what else could be wrong. It can work for an hour or so, and then suddenly segfault for no apparent reason.
Has anybody ever encountered a similar problem?
From my answer to How to generate a stacktrace when my gcc C++ app crashes:
The first two entries in the stack frame chain when you get into the
signal handler contain a return address inside the signal handler and
one inside sigaction() in libc. The stack frame of the last function
called before the signal (which is the location of the fault) is lost.
This may explain why you are having difficulties determining the location of your segfault via a backtrace from a signal handler. My answer also includes a workaround for this limitation.
If you want to see how your application is actually laid out in memory (i.e. the 0x80..... addresses), you should be able to generate a map file from gcc. This is typically done via -Wl,-Map,output.map, which passes -Map output.map to the linker.
You may also have a hardware-specific version of objdump or nm with your toolchain/cross-toolchain that may be helpful in deciphering your 0x80..... addresses.
Do you have access to Helgrind on your platform? It's a Valgrind tool for detecting POSIX thread errors such as races and threads holding mutexes when they exit.

Use of threads in C++

Can you tell me how I can use threads in C++ programs, and how I can compile a program so that it is multithreaded? Can you tell me some good site where I can learn this from scratch?
Thanks
I haven't used it myself, but I'm told that the Boost thread libraries make it incredibly easy.
http://www.boost.org/doc/libs/1_37_0/doc/html/thread.html
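For illustration, a minimal Boost.Thread program might look something like this (check the linked documentation for the exact API of your Boost version, and link against the boost_thread library):

#include <boost/thread.hpp>
#include <iostream>

void worker()
{
    std::cout << "hello from a worker thread\n";
}

int main()
{
    boost::thread t( worker );  // starts running worker() immediately
    t.join();                   // wait for it to finish before exiting
    return 0;
}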
For Unix/Linux/BSD, there's the pthread library: tutorial.
I guess there are equivalents in the Win32 API.
Process and Threads
Synchronization
I use the tbb_thread class from the Intel Threading Building Blocks library.
There are many threading libraries which are compatible with C++, so first you must select one. I prefer OpenMP or POSIX threads (also known as pthreads). How to compile it depends on the library you choose.
I use a library my university prof wrote. It is very simple to implement and works really well (used it for quite some time now). I will ask his permission to share it with you.
Sorry for the wait ahead, but gotta check :)
++++++EDIT+++++++
Ok, so I talked to my prof and he doesn't mind if I share it here. Below are the .h and .cpp files for the 'RT Library' written by Paul Davies
http://www.filefactory.com/file/7efbeb/n/rt_h
http://www.filefactory.com/file/40d9a6/n/rt_cpp
Some points to be made about threads and the use of this library:
0) This tutorial will explain thread creation and use on a Windows platform.
1) Threads in C++ are usually coded as part of the same source (unlike processes, where each process has its own source file and main() function).
2) When a process is up and running, it can create other threads by making appropriate Kernel calls.
3) Multiple threads run faster than multiple processes since they are a part of the same process which results in less of an overhead for the OS, and reduced memory requirements.
4) What you will be using in your case is the CThread class in the rt library.
5) (Make sure rt.h and rt.cpp are a part of your 'solution' and make sure to include rt.h in your main.cpp)
6) Below is a part of code from your future main thread (in main.cpp, of course) where you will create the thread using the CThread class.
void main()
{
    CThread t1(ChildThread1, ACTIVE, NULL) ;
    . . .
    t1.WaitForThread() ; // if thread already dead, then proceed, otherwise wait
}
The arguments of t1 in order are: Name of the function acting as our thread, the thread status (it can be either ACTIVE or SUSPENDED - depending on what you want), and last, a pointer to an optional data you may want to pass to the thread at creation. After you execute some code, you'll want to call the WaitForThread() function.
7) Below is a part of code from your future main thread (in main.cpp, of course) where you will describe what the child thread does.
UINT __stdcall ChildThread1(void *args)
{
    . . .
}
The odd looking thing there is Microsoft's thread signature. I'm sure with a bit of research you can figure out how to do this in other OSs. The argument is the optional data that could be passed to the child at creation.
8) You can find the detailed descriptions of the member functions in the rt.cpp file. Here are the summaries:
CThread() - The constructor responsible for creating the thread
Suspend() - Suspends a child thread, effectively pausing it
Resume() - Wakes up a suspended child thread
SetPriority(int value) - Changes the priority of a child thread to the value specified
Post(int message) - Posts a message to a child thread
TerminateThread() - Terminates or kills a child thread
WaitForThread() - Pauses the parent thread until a child thread terminates; if the child thread has already terminated, the parent will not pause
9) Below is an example of a sample complete program. A clever thing you can do is create multiple instantiations of a single thread.
#include "..\wherever\it\is\rt.h"   // notice the Windows path notation

int ThreadNum[8] = {0,1,2,3,4,5,6,7} ;   // an array of thread numbers

UINT __stdcall ChildThread (void *args)  // A thread function
{
    int MyThreadNumber = *(int *)(args);

    for ( int i = 0; i < 100; i ++)
        printf( "I am the Child thread: My thread number is [%d] \n", MyThreadNumber) ;

    return 0 ;
}
int main()
{
    CThread *Threads[8] ;

    // Create 8 instances of the above thread code and let each thread know which number it is.
    for ( int i = 0; i < 8; i ++) {
        printf ("Parent Thread: Creating Child Thread %d in Active State\n", i) ;
        Threads[i] = new CThread (ChildThread, ACTIVE, &ThreadNum[i]) ;
    }

    // wait for threads to terminate, then delete thread objects we created above
    for ( int i = 0; i < 8; i ++) {
        Threads[i]->WaitForThread() ;
        delete Threads[i] ;   // delete the object created by 'new'
    }

    return 0 ;
}
10) That's it! The rt library includes a bunch of classes that enables you to work with processes and threads and other concurrent programming techniques. Discover the rest ;)
You may want to read my earlier posting on SO.
(In hindsight, that posting is a little one-sided towards pthreads. But I'm a Unix/Linux kind of guy. And that approach seemed best with respect to the original topic.)
Usage of threads in C/C++:
#include <iostream>
using namespace std;

extern "C"
{
#include <stdlib.h>
#include <pthread.h>
void *print_message_function( void *ptr );
}

int main()
{
    pthread_t thread1, thread2;
    const char *message1 = "Thread 1";
    const char *message2 = "Thread 2";
    int iret1, iret2;

    iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
    iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);

    pthread_join( thread1, NULL);
    pthread_join( thread2, NULL);

    //printf("Thread 1 returns: %d\n",iret1);
    //printf("Thread 2 returns: %d\n",iret2);
    cout << "Thread 1 returns: " << iret1 << endl;
    cout << "Thread 2 returns: " << iret2 << endl;
    exit(0);
}
void *print_message_function( void *ptr )
{
    const char *message = (const char *) ptr;
    //printf("%s \n", message);
    cout << message << endl;
    return NULL;
}