I have a multithreaded project and I've run it through Valgrind with --tool=helgrind, which reported a few errors. I'm using a mutex exactly the way the examples I found on the net use it; can you please show me what's wrong?
#include <iostream>
#include <pthread.h>

#define MAX_THREADS 100
#define MAX_SESSIONS 100

static pthread_mutex_t M_CREATE_SESSION_LOCK = PTHREAD_MUTEX_INITIALIZER;
.....
void connection::proccess(threadVarsType &THREAD) {
    ....
    pthread_mutex_lock(&M_CREATE_SESSION_LOCK);
    unsigned int ii;
    for (ii = 0; ii < MAX_SESSIONS; ii++) {
        if (SESSION[ii] == NULL) {
            break;
        }
    }
    if (ii == MAX_SESSIONS - 1) {
        ....
        pthread_mutex_unlock(&M_CREATE_SESSION_LOCK); // unlock session mutex
        ....
        return;
    } else {
        ....
        pthread_mutex_unlock(&M_CREATE_SESSION_LOCK); // unlock session mutex
        ....
    }
    ....
}
and the error messages:
==4985== Thread #1's call to pthread_mutex_lock failed
==4985== with error code 22 (EINVAL: Invalid argument)
....
==4985== Thread #1 unlocked an invalid lock at 0x4E7B40
==4985== at 0x32CD8: pthread_mutex_unlock (hg_intercepts.c:610)
....
==4985== Thread #1's call to pthread_mutex_unlock failed
==4985== with error code 22 (EINVAL: Invalid argument)
....
==4985== Thread #1's call to pthread_mutex_lock failed
==4985== with error code 22 (EINVAL: Invalid argument)
....
==4985== Thread #1 unlocked an invalid lock at 0x4E7B40
==4985== at 0x32CD8: pthread_mutex_unlock (hg_intercepts.c:610)
....
==4985== Thread #1's call to pthread_mutex_unlock failed
==4985== with error code 22 (EINVAL: Invalid argument)
First, always check the return values of your function calls. If a pthread call fails, a reasonable response is to call abort(), which produces a core dump if you have that enabled, or drops you into the debugger if you are running under one.
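A minimal sketch of what that kind of checking might look like (the helper function and names here are invented for illustration, not taken from the question):

#include <cstdio>
#include <cstdlib>
#include <pthread.h>

// Abort as soon as any pthread call reports an error, so you get a
// core dump (or a debugger break) right at the point of failure.
static void check_rc(int rc, const char *what) {
    if (rc != 0) {
        std::fprintf(stderr, "%s failed with error %d\n", what, rc);
        std::abort();
    }
}

static pthread_mutex_t session_lock = PTHREAD_MUTEX_INITIALIZER;

void locked_section() {
    check_rc(pthread_mutex_lock(&session_lock), "pthread_mutex_lock");
    // ... critical section ...
    check_rc(pthread_mutex_unlock(&session_lock), "pthread_mutex_unlock");
}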
The pthread function calls really should never fail, which means that something is seriously wrong with your program. In a C or C++ program something that commonly causes mysterious failures is memory corruption. Use valgrind in its normal modes to check for that.
Another thing that can cause pthread calls to fail is not compiling with -pthread. If you use GCC, compile and link with that flag (e.g. gcc -pthread). It links the pthread library and sets some preprocessor defines that may matter to your system's header files.
Some systems will successfully compile and link a program that is using pthread calls without linking it to the pthread libraries. This is done so that a program or library can be made thread-safe without actually using threads. The thread calls will be linked to dummy functions unless the real pthread library is linked. That can lead to some function calls failing.
So make sure you are building with the correct compiler options to include the pthread libraries.
Another possible cause is if you are building on some whacked-out half-and-half hybrid OS where it started as Linux 2.4 and got upgraded to Linux 2.6 NPTL at some point (I worked on something like this once). If you are attempting to compile against old header files with an outdated definition of PTHREAD_MUTEX_INITIALIZER or the wrong size for the type of pthread_mutex_t then that could cause the problem.
That error suggests something is wrong with the initialization of the mutex. It's hard to say what exactly, but make sure you're initializing it in the right place.
On the Helgrind docs page, they mention that there can be false positives that are supposed to be suppressed; somehow you might be bumping into those, since on the surface it does not look like you're using pthread mutexes incorrectly.
Here's what they write:
Helgrind's error checks do not work properly inside the system threading library itself (libpthread.so), and it usually observes large numbers of (false) errors in there. Valgrind's suppression system then filters these out, so you should not see them.
If you see any race errors reported where libpthread.so or ld.so is the object associated with the innermost stack frame, please file a bug report at http://www.valgrind.org/.
They also note that you should be using a "supported Linux distribution" ... they don't mention what exactly that means, but if you're using a non-Linux OS, that could also possibly cause some of these "false positives". It might be worth asking the development team to see what they say about this.
The error EINVAL on a call to pthread_mutex_lock means one of two things.
The mutex was created with the protocol attribute having the value PTHREAD_PRIO_PROTECT and the calling thread's priority is higher than the mutex's current priority ceiling.
or
The value specified by mutex does not refer to an initialised mutex object.
The second one seems more likely. Try initializing the mutex in your main function with int error = pthread_mutex_init(&M_CREATE_SESSION_LOCK, NULL); and check whether it reports an error, instead of initializing it with the PTHREAD_MUTEX_INITIALIZER macro as you do now.
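For example, a rough sketch of explicit initialization in main (just an outline, under the assumption that all threads are started after main begins):

#include <cstdlib>
#include <pthread.h>

static pthread_mutex_t M_CREATE_SESSION_LOCK;

int main() {
    int error = pthread_mutex_init(&M_CREATE_SESSION_LOCK, NULL);
    if (error != 0) {
        // The mutex never became usable; stop before any thread touches it.
        return EXIT_FAILURE;
    }

    // ... start worker threads, run the server loop ...

    pthread_mutex_destroy(&M_CREATE_SESSION_LOCK);
    return 0;
}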
I wrote a DLL, MyDLL.dll, with Visual C++ 2008, as follows:
(1) MFC statically linked.
(2) Using the multi-threaded runtime library.
In the DLL there is a global variable m_Data shared by two exported functions, as follows:
ULONGLONG WINAPI MyFun1(LPVOID *lpCallbackFun1)
{
    ...
    Write m_Data (using critical section to protect)
    ...
    return xxx;
}

ULONGLONG WINAPI MyFun2(LPVOID *lpCallbackFun2)
{
    ...
    Suspend MyThread1 to prevent conflict.
    Read m_Data (using critical section to protect)
    Resume MyThread1.
    ...
    return xxx;
}
In my main application, it first calls LoadLibrary to load MyDLL.dll, then gets the addresses of MyFun1 and MyFun2, and then does the following:
(1) Start a new thread, MyThread1, which invokes MyFun1 to do a time-consuming task.
(2) Start a new thread, MyThread2, which invokes MyFun2 several times, as follows:
for (nIndex = 0; nIndex < 20; nIndex++)
{
    nResult2 = MyFun2(lpCallbackFun2);
    NextStatement2;
}
Although MyThread1 and MyThread2 use a critical section to protect the shared data m_Data, I still suspend MyThread1 before accessing the shared data, to prevent any possible conflicts.
The problem is:
(1) On the first invocation of MyFun2, everything is OK, and the return value of MyFun2 (that is, nResult2) is 1, as expected.
(2) On the second, third and fourth invocations of MyFun2, the operations inside MyFun2 execute successfully, but the return value (nResult2) is a random value instead of the expected 1. I traced into MyFun2 with the debugger and confirmed that the final return statement really does return 1, yet the caller sees a random value in nResult2.
(3) After the fourth invocation of MyFun2, when control returns to the statement following the call, I always get a "buffer overrun detected" error, whatever the next statement is.
I think this looks like stack corruption, so I made some tests:
I confirmed that the /GS (stack security check) feature in the compiler is ON.
If MyFun2 is invoked only after MyFun1 in MyThread1 has completed, then everything is OK.
In debug mode, the line in MyFun2 that reads the shared data m_Data does not cause any errors or exceptions. Neither does the line in MyFun1 that writes the shared data.
So, how can I solve this problem?
Thank you!
I suppose that at this line:
Suspend MyThread1 to prevent conflict.
you are using the SuspendThread() function. This is what its documentation says:
This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.
So, in short: don't use it. Critical sections and other synchronization objects do their job just fine.
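As a hedged sketch of what that looks like (the names mirror the question, but this is not the real DLL code), a single critical section protecting m_Data makes the SuspendThread/ResumeThread calls unnecessary:

#include <windows.h>

static CRITICAL_SECTION g_dataLock;   // call InitLocks() once at startup
static ULONGLONG m_Data = 0;

void InitLocks() { InitializeCriticalSection(&g_dataLock); }

ULONGLONG WINAPI MyFun1(LPVOID * /*lpCallbackFun1*/) {
    EnterCriticalSection(&g_dataLock);
    m_Data += 1;                        // write the shared data
    LeaveCriticalSection(&g_dataLock);
    return 1;
}

ULONGLONG WINAPI MyFun2(LPVOID * /*lpCallbackFun2*/) {
    EnterCriticalSection(&g_dataLock);  // no SuspendThread needed
    ULONGLONG snapshot = m_Data;        // read the shared data
    LeaveCriticalSection(&g_dataLock);
    (void)snapshot;                     // ... use the snapshot outside the lock ...
    return 1;
}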
Never use SuspendThread!!! NEVER!
SuspendThread is only meant for debugging purposes.
The reason is simple: you don't know where you suspend the thread. It might be exactly at the moment when the thread holds a resource that you want to use. Also, a bunch of CRT functions use thread synchronisation internally.
Just use critical sections or mutexes.
See the simple samples here: http://blog.kalmbachnet.de/?postid=6 and here:
http://blog.kalmbachnet.de/?postid=16
Since this is a Windows program, you could use a Windows mutex or semaphore and WaitForSingleObject when reading or writing the shared data.
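A rough sketch of that approach (hypothetical names, error handling omitted):

#include <windows.h>

static HANDLE g_hDataMutex = NULL;

bool InitDataMutex() {
    g_hDataMutex = CreateMutexW(NULL, FALSE, NULL); // unnamed, initially unowned
    return g_hDataMutex != NULL;
}

void TouchSharedData() {
    if (WaitForSingleObject(g_hDataMutex, INFINITE) == WAIT_OBJECT_0) {
        // ... read or write m_Data here ...
        ReleaseMutex(g_hDataMutex);
    }
}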
I created a thread using the C++11 thread class, and I want the thread to sleep in a loop.
When the this_thread::sleep_for() function is called, I get an exception saying:
Run-Time Check Failure #2 - Stack around the variable '_Now' was corrupted.
My code is below:
std::chrono::milliseconds duration( 5000 );
while (m_connected)
{
    this->CheckConnection();
    std::this_thread::sleep_for(duration);
}
I presume _Now is a local variable somewhere deep in the implementation of sleep_for. If it gets corrupted, either there is a bug in that function (unlikely) or some other part of your application is writing through a dangling pointer (much more likely).
The most likely cause is that, some time before calling sleep_for, you handed out a pointer to a local variable; that pointer outlives the variable and is written through by another thread while this thread sleeps (see the sketch below).
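For illustration only, here is a deliberately broken sketch of that pattern (all names invented; this is not your code):

#include <chrono>
#include <iostream>
#include <thread>

// BUG on purpose: the address of a local escapes its stack frame, and
// another thread keeps writing through the dangling pointer.
int *g_shared = nullptr;

void publish_pointer() {
    int local = 0;
    g_shared = &local;                 // address of a local escapes
}                                      // 'local' dies here; g_shared dangles

void writer() {
    for (int i = 0; i < 100000; ++i) {
        if (g_shared) *g_shared = i;   // scribbles over reused stack memory
    }
}

int main() {
    publish_pointer();
    std::thread w(writer);
    // While this thread sleeps, 'writer' may corrupt whatever now occupies
    // the old stack slot -- for example, locals inside sleep_for itself.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    w.join();
    std::cout << "done (if we got this far)" << std::endl;
}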
If you were on Linux, I'd recommend trying valgrind (though I am not certain it can catch invalid accesses to the stack), but on Windows I don't know of any tool for debugging this kind of problem. You can do a careful review, and you can try disabling various parts of the functionality to see when the problem goes away, to narrow down where it might be.
I also used to use the duma library with some success, but it can only catch invalid access to the heap, not the stack.
Note: Both clang and gcc are further along in implementing C++11 than MSVC++, so if you don't use much Windows-specific stuff, it might be easy to port the code and try valgrind on it. Gcc and especially clang are also known for giving much better static diagnostics than MSVC++, so if you compile it with gcc or clang, you may get a warning that points you to the problem.
I have a need for interprocess synchronization around a piece of hardware. Because this code will need to work on Windows and Linux, I'm wrapping it with Boost Interprocess mutexes. Everything works well except my method for checking abandonment of the mutex. There is the potential that this can happen, so I must prepare for it.
I've abandoned the mutex in my testing and, sure enough, when I use scoped_lock to lock the mutex, the process blocks indefinitely. I figured the way around this is to use the timeout mechanism on scoped_lock (much time spent Googling for other ways to handle this didn't turn up much; Boost doesn't do a lot here, for portability reasons).
Without further ado, here's what I have:
#include <boost/interprocess/sync/named_recursive_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
typedef boost::interprocess::named_recursive_mutex MyMutex;
typedef boost::interprocess::scoped_lock<MyMutex> ScopedLock;
MyMutex* pGate = new MyMutex(boost::interprocess::open_or_create, "MutexName");
{
// ScopedLock lock(*pGate); // this blocks indefinitely
boost::posix_time::ptime timeout(boost::posix_time::microsec_clock::local_time() + boost::posix_time::seconds(10));
ScopedLock lock(*pGate, timeout); // a 10 second timeout that returns immediately if the mutex is abandoned ?????
if(!lock.owns()) {
delete pGate;
boost::interprocess::named_recursive_mutex::remove("MutexName");
pGate = new MyMutex(boost::interprocess::open_or_create, "MutexName");
}
}
That, at least, is the idea. Three interesting points:
When I don't use the timeout object, and the mutex is abandoned, the ScopedLock ctor blocks indefinitely. That's expected.
When I do use the timeout, and the mutex is abandoned, the ScopedLock ctor returns immediately and tells me that it doesn't own the mutex. Ok, perhaps that's normal, but why isn't it waiting for the 10 seconds I'm telling it to?
When the mutex isn't abandoned, and I use the timeout, the ScopedLock ctor still returns immediately, telling me that it couldn't lock, or take ownership, of the mutex and I go through the motions of removing the mutex and remaking it. This is not at all what I want.
So, what am I missing on using these objects? Perhaps it's staring me in the face, but I can't see it and so I'm asking for help.
I should also mention that, because of how this hardware works, if the process cannot gain ownership of the mutex within 10 seconds, the mutex is abandoned. In fact, I could probably wait as little as 50 or 60 milliseconds, but 10 seconds is a nice "round" number of generosity.
I'm compiling on Windows 7 using Visual Studio 2010.
Thanks,
Andy
When I don't use the timeout object, and the mutex is abandoned, the ScopedLock ctor blocks indefinitely. That's expected
The best solution for your problem would be if Boost had support for robust mutexes. However, Boost currently does not support robust mutexes; there is only a plan to emulate them, because only Linux has native support for them. The emulation is still just planned by Ion Gaztanaga, the library author.
Check this link about a possible way of hacking robust mutexes into the Boost libraries:
http://boost.2283326.n4.nabble.com/boost-interprocess-gt-1-45-robust-mutexes-td3416151.html
Meanwhile you might try using atomic variables in a shared segment (a rough sketch follows below).
Also take a look at this stackoverflow entry:
How do I take ownership of an abandoned boost::interprocess::interprocess_mutex?
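To make the atomic-variable suggestion slightly more concrete, here is a very rough sketch (all names invented, and it assumes std::atomic<int> is lock-free on your platform so it is safe to place in shared memory):

#include <atomic>
#include <boost/interprocess/managed_shared_memory.hpp>

namespace bip = boost::interprocess;

int main() {
    // A small shared segment holding a single lock-free flag.
    bip::managed_shared_memory shm(bip::open_or_create, "demo_shm", 65536);
    std::atomic<int> *flag =
        shm.find_or_construct<std::atomic<int> >("lock_flag")(0);

    int expected = 0;
    if (flag->compare_exchange_strong(expected, 1)) {
        // ... critical section; if this process dies here, another process
        // can decide after its own timeout that the "lock" was abandoned ...
        flag->store(0);
    }
    return 0;
}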
When I do use the timeout, and the mutex is abandoned, the ScopedLock ctor returns immediately and tells me that it doesn't own the mutex. Ok, perhaps that's normal, but why isn't it waiting for the 10 seconds I'm telling it to?
This is very strange; you should not get this behavior. However:
The timed lock is possibly implemented in terms of the try lock. Check this documentation:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost/interprocess/scoped_lock.html#idp57421760-bb
This means the implementation of the timed lock might throw an exception internally and then return false.
inline bool windows_mutex::timed_lock(const boost::posix_time::ptime &abs_time)
{
    sync_handles &handles =
        windows_intermodule_singleton<sync_handles>::get();
    // This can throw
    winapi_mutex_functions mut(handles.obtain_mutex(this->id_));
    return mut.timed_lock(abs_time);
}
Possibly, the handle cannot be obtained, because the mutex is abandoned.
When the mutex isn't abandoned, and I use the timeout, the ScopedLock ctor still returns immediately, telling me that it couldn't lock, or take ownership, of the mutex and I go through the motions of removing the mutex and remaking it. This is not at all what I want.
I am not sure about this one, but I think the named mutex is implemented using shared memory. If you are on Linux, check for the file /dev/shm/MutexName. On Linux, a file descriptor remains valid until it is closed, regardless of whether you have removed the file itself, e.g. with boost::interprocess::named_recursive_mutex::remove.
Check out the BOOST_INTERPROCESS_ENABLE_TIMEOUT_WHEN_LOCKING and BOOST_INTERPROCESS_TIMEOUT_WHEN_LOCKING_DURATION_MS compile flags. Define the first symbol in your code to force the interprocess mutexes to time out and the second symbol to define the timeout duration.
I helped get them added to the library to solve the abandoned-mutex issue. It was necessary because many interprocess constructs (like message_queue) rely on the simple mutex rather than the timed mutex. There may be a more robust solution in the future, but this one has worked just fine for my interprocess needs.
I'm sorry I can't help you with your code at the moment; something is not working correctly there.
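A rough sketch of how those defines might be used (they must be visible before any Boost.Interprocess header is included, e.g. defined project-wide; the names below are otherwise made up):

// Must come before any Boost.Interprocess header.
#define BOOST_INTERPROCESS_ENABLE_TIMEOUT_WHEN_LOCKING
#define BOOST_INTERPROCESS_TIMEOUT_WHEN_LOCKING_DURATION_MS 10000 // 10 seconds

#include <boost/interprocess/exceptions.hpp>
#include <boost/interprocess/sync/named_recursive_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

int main() {
    bip::named_recursive_mutex gate(bip::open_or_create, "MutexName");
    try {
        // With the defines above, a lock that cannot be acquired within the
        // configured duration throws instead of blocking forever.
        bip::scoped_lock<bip::named_recursive_mutex> lock(gate);
        // ... talk to the hardware ...
    } catch (bip::interprocess_exception &) {
        // Presume the previous owner died; recover/remove the mutex here.
    }
    return 0;
}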
BOOST_INTERPROCESS_ENABLE_TIMEOUT_WHEN_LOCKING is not so good: it throws an exception, which does not help much. To work around that exceptional behaviour I wrote this macro. It works well enough for common purposes. In this sample, named_mutex is used. The macro creates a scoped lock with a timeout, and if the lock cannot be acquired for EXCEPTIONAL reasons, it unlocks the mutex afterwards. This way the program can lock it again later and does not freeze or crash immediately.
#define TIMEOUT 1000
#define SAFELOCK(pMutex) \
boost::posix_time::ptime wait_time \
= boost::posix_time::microsec_clock::universal_time() \
+ boost::posix_time::milliseconds(TIMEOUT); \
boost::interprocess::scoped_lock<boost::interprocess::named_mutex> lock(*pMutex, wait_time); \
if(!lock.owns()) { \
pMutex->unlock(); }
But even this is not optimal, because the code to be locked then runs unlocked once, which may cause problems. You can easily extend the macro, however, e.g. to run the code only if lock.owns() is true; a sketch follows below.
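One possible extension along those lines, kept as a sketch (same assumptions as the macro above, including the guess that an unacquirable lock was abandoned by a dead owner):

// Hypothetical variant: only run the guarded code when the lock was
// actually acquired; otherwise steal the presumably abandoned lock.
#define SAFELOCK_RUN(pMutex, code)                                         \
    {                                                                      \
        boost::posix_time::ptime wait_time =                               \
            boost::posix_time::microsec_clock::universal_time() +          \
            boost::posix_time::milliseconds(TIMEOUT);                      \
        boost::interprocess::scoped_lock<boost::interprocess::named_mutex> \
            lock(*pMutex, wait_time);                                      \
        if (lock.owns()) {                                                 \
            code;                                                          \
        } else {                                                           \
            pMutex->unlock();                                              \
        }                                                                  \
    }

It would be used as SAFELOCK_RUN(pMutex, do_critical_work());, so the guarded code simply never runs when the lock could not be taken.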
boost::interprocess::named_mutex has three different definitions depending on the platform:
On Windows, you can use a macro to make it use a native Windows mutex instead of Boost's emulation. You can then try/catch the abandoned-mutex exception, and you should unlock the mutex when you catch it.
On Linux, Boost uses a pthread_mutex, but it does not set the robust attribute (as of version 1.65.1).
Because of that, I implemented an interprocess mutex myself using the system APIs (a Windows mutex, and a pthread mutex in process-shared mode on Linux); note that a Windows mutex lives in the kernel rather than in a file.
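For the Linux side, here is a rough sketch of what "process-shared plus robust" looks like with the raw pthread API (the function names are invented, and the pthread_mutex_t must live in memory mapped into every participating process):

#include <errno.h>
#include <pthread.h>

// Initialise a mutex that lives in a shared-memory segment and can be
// recovered if the owning process dies while holding it.
int init_shared_robust_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    int rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

// Locking side: EOWNERDEAD means the previous owner died; the state must
// be marked consistent before the mutex can be used normally again.
int lock_shared_robust_mutex(pthread_mutex_t *m) {
    int rc = pthread_mutex_lock(m);
    if (rc == EOWNERDEAD) {
        rc = pthread_mutex_consistent(m);
    }
    return rc;
}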
Craig Graham answered this in a reply already but I thought I'd elaborate because I found this, didn't read his message, and beat my head against it to figure it out.
On a POSIX system, timed lock calls:
timespec ts = ptime_to_timespec(abs_time);
pthread_mutex_timedlock(&m_mut, &ts)
where abs_time is the ptime that the user passes into the interprocess timed_lock.
The problem is that abs_time must be in UTC, not local (system) time.
Assume that you want to wait for 10 seconds: if your local clock is ahead of UTC, your timed_lock() will return immediately, and if it is behind UTC, your timed_lock() will return after hours_behind - 10 seconds.
The following ptime times out an interprocess mutex in 10 seconds:
boost::posix_time::ptime now = boost::posix_time::second_clock::universal_time() +
boost::posix_time::seconds(10);
If I use ::local_time() instead of ::universal_time(), since I'm ahead of UTC, it returns immediately.
The documentation fails to mention this.
I haven't tried it, but digging into the code a bit, it looks like the same problem would occur on a non-POSIX system.
If BOOST_INTERPROCESS_POSIX_TIMEOUTS is not defined, the function ipcdetail::try_based_timed_lock(*this, abs_time) is called.
It uses universal time as well, waiting on while(microsec_clock::universal_time() < abs_time).
This is only speculation, as I don't have quick access to a Windows system to test this on.
For full details, see https://www.boost.org/doc/libs/1_76_0/boost/interprocess/sync/detail/common_algorithms.hpp
For purposes of thread-local cleanup I need to create an assertion that checks whether the current thread was created via boost::thread. How can I check whether that is the case? That is, how can I check whether the current thread is managed by boost::thread?
I simply need this to do a cleanup of thread-local storage when the thread exits. Boost's thread_specific_ptr appears to work only if the thread itself is a boost thread.
Note that I'm not doing the check at cleanup time, but at some point during the life of the thread. Some function calls one of our APIs/callbacks (indirectly), causing me to allocate thread-local storage. Only boost threads are allowed to do this, so I need to detect at that moment whether the thread is not a boost thread.
Refer to Destruction of static class members in Thread local storage for the problem of not having a generic cleanup handler. I answered that and then realized pthread_cleanup_push won't actually work: it isn't called on a clean exit from the thread.
While I don't have an answer for detecting a boost thread, the accepted answer does solve the root of my problem. Boost thread_specific_ptr's will call their cleanup in any pthread. It must have been something else causing it not to work for me, as an isolated test shows that it does work.
The premise for your question is mistaken :) boost::thread_specific_ptr works even if the thread is not a boost thread. Think about it -- how would thread specific storage for the main thread work, seeing as it's impossible for it to be created by boost? I have used boost::thread_specific_ptr from the main thread fine, and although I haven't examined boost::thread_specific_ptr's implementation, the most obvious way of implementing it would work even for non-boost threads. Most operating systems let you get a unique ID number for the current thread, which you can then use as an index into a map/array/hashtable.
More likely you have a different bug that prevents the behavior you're expecting to see from happening. You should open a separate question with a small compilable code sample illustrating the unexpected behavior.
You can't do this with a static assertion: That would mean you could detect it at compile time, and that's impossible.
Assuming you mean a runtime check though:
If you don't mix boost::thread with other methods, then the problem just goes away. Any libraries that are creating threads should already be dealing with their own threads automatically (or per a shutdown function the API documents that you must call).
Otherwise you can keep, for example, a container of all the pthread_t handles you create without boost::thread and check whether the current thread is in the container when shutting down. If it's not in the container, then it was created using boost::thread (a rough sketch follows below).
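A rough sketch of that registry idea (all names invented; it assumes pthread_t is an ordered scalar type, which holds on Linux):

#include <set>
#include <pthread.h>
#include <boost/thread/mutex.hpp>

static std::set<pthread_t> g_non_boost_threads;
static boost::mutex g_registry_mutex;

// Call this right after creating a thread with raw pthread_create.
void register_non_boost_thread(pthread_t t) {
    boost::mutex::scoped_lock guard(g_registry_mutex);
    g_non_boost_threads.insert(t);
}

// Anything not in the registry is presumed to be a boost::thread.
bool current_thread_is_boost_thread() {
    boost::mutex::scoped_lock guard(g_registry_mutex);
    return g_non_boost_threads.count(pthread_self()) == 0;
}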
EDIT: Instead of trying to detect if it was created with boost::thread, have you considered setting up your application so that the API callback can only occur in threads created with boost::thread? This way you prevent the problem up front and eliminate the need for a check that, if it even exists, would be painful to implement.
Each time a boost thread ends, all of its thread-specific data gets cleaned up. Each TSS item is a pointer, and delete p is called on it at destruction/reset.
Optionally, instead of delete p, a cleanup handler can be called for each item. That handler is specified in the thread_specific_ptr constructor, and you can use that cleanup function to do your one-time cleaning.
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/thread/tss.hpp>

// Cleanup handler: called for the stored value instead of delete.
void cleanup(int* _ignored) {
    std::cout << "TLS cleanup" << std::endl;
}

void thread_func() {
    boost::thread_specific_ptr<int> x(cleanup);
    x.reset((int*)1); // Force cleanup to be called on this thread
    std::cout << "Thread begin" << std::endl;
}

int main(int argc, char** argv) {
    boost::thread t(thread_func);
    t.join();
    return 0;
}
Is the following safe?
I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program.
Using the boost libraries I have written code something like this:
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value.
I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread).
So is this okay, or am I missing something and need to use locks, mutexes, etc.?
You never mentioned the type of finished_flag...
If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.
If, on the other hand, it's a thread-safe type, like a CEvent in MFC (or an equivalent in boost), then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.
Instead of using a member variable to signal that the thread is done, why not use a condition? You are already using the boost libraries, and condition is part of the thread library; a minimal sketch follows below.
Check it out. It allows the worker thread to 'signal' that it has finished, and the main thread can check during execution whether the condition has been signaled and then do whatever it needs to do with the completed work. There are examples in the link.
As a general rule I would never assume that a resource will only be modified by one thread. You might know what it is for, but someone else might not, causing no end of grief as the main thread thinks the work is done and tries to access data that is not correct. It might even delete the data while the worker thread is still using it, causing the app to crash. Using a condition will help with this.
Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join waits a specified amount of time for the thread to 'join' (meaning the thread has finished).
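A minimal sketch of the condition approach (names made up; the mutex guards the flag and the condition signals the change):

#include <boost/thread/condition.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>

boost::mutex m;
boost::condition done_cond;
bool done = false;

void worker() {
    // ... time-consuming work ...
    boost::mutex::scoped_lock lock(m);
    done = true;
    done_cond.notify_one();     // signal that the work has finished
}

int main() {
    boost::thread t(worker);
    {
        boost::mutex::scoped_lock lock(m);
        while (!done)           // guard against spurious wakeups
            done_cond.wait(lock);
    }
    t.join();
    return 0;
}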
I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed.
The easiest way to do this is to use boost::thread::join
// launch the thread...
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
// ... do other things maybe ...
// wait for the thread to complete
thrd->join();
If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of the data after checking the flag. The CPU can issue reads and writes out of order (x86 usually doesn't, but PPC definitely does), and there is nothing in C++98 that allows the compiler to generate code to order memory accesses appropriately.
Herb Sutter's Effective Concurrency series has an extremely in depth look at how the C++ world intersects the multicore/multiprocessor world.
Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing.
For example, consider a program that loads a dynamic library (pseudocode):
lib = loadLibrary("someLibrary");
fun = getFunction("someFunction");
fun();
unloadLibrary(lib);
And let's suppose that this library uses your thread:
void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    while (!finished_flag) { // ignore the polling loop, it's beside the point
        sleep();
    }
    delete thrd;
}

void myclass::mymethod() {
    // do stuff
    finished_flag = true;
}
When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes.
The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.
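A minimal, self-contained sketch of that fix (hypothetical names; the point is only that join() happens before the code that started the thread can go away):

#include <boost/thread/thread.hpp>

void do_work() { /* ... time-consuming task ... */ }

void someFunction() {
    boost::thread worker(&do_work);
    // ... other work ...
    worker.join();   // past this point the worker has fully returned to the OS
}

int main() {
    someFunction();
    return 0;
}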