Up until now I believed that MPI_Barrier was blocking: when a process encounters a barrier, it should stop and wait until all other processes in the communicator have arrived at the same barrier.
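(For reference, a minimal sketch of that expectation, written in C/C++ here even though my snippet below is Fortran: every rank in the communicator calls the barrier, and each one blocks until all ranks have arrived.)

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    /* Every rank calls the barrier; each one blocks here until all
       ranks in MPI_COMM_WORLD have reached this call. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}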
Then at some point I programmed the following bug:
[...]
if (my_rank==0) then
call mpi_barrier(mpi_comm_world,ierr)
end if
[...]
Now I would have expected that all other processes continue their work and eventually hang at the next blocking MPI call, while process 0 waits indefinitely at the above barrier.
However, this is not what happened: all processes continued to execute, and they all hung at the next MPI call in the code. I printed the rank of all the processes right before this second MPI call, so I know they all arrived at that point, including task 0.
I troubleshot like crazy to find the error, until I finally looked further and further back in the code and found the above bug.
Has anyone ever encountered behavior like this? Do I have a misconception of how mpi_barrier works? I am using mvapich2.2 and ifort15.
Thanks for all your help.
I am working on a project where we have used pthread_create to create several child threads.
The thread creation logic is not in my control, as it's implemented by another part of the project.
Each thread performs some operation that takes more than 30 seconds to complete.
Under normal condition the program works perfectly fine.
But the problem occurs at the time of termination of the program.
I need to exit from main as quickly as possible when I receive the SIGINT signal.
When I call exit() or return from main, the exit handlers and global objects' destructors are called. I believe these operations race with the still-running threads, and there are so many such race conditions that it is hard to fix them all.
The way I see it, there are two solutions:
1. Call _exit() and forget about all deallocation of resources.
2. When SIGINT arrives, close/kill all threads and then call exit() from the main thread, which will release resources.
I think the 1st option will work, but I do not want to terminate the process abruptly.
So I want to know whether it is possible to terminate all child threads as quickly as possible, so that the exit handlers and destructors can perform the required clean-up tasks and terminate the program.
I have gone through this post; let me know if you know other ways: POSIX API call to list all the pthreads running in a process
Also, let me know if there is any other solution to this problem.
What is it that you need to do before the program quits? If the answer is 'deallocate resources', then you don't need to worry. If you call _exit then the program will exit immediately and the OS will clean up everything for you.
Be aware also that what you can safely do in a signal handler is extremely limited, so attempting to perform any cleanup yourself is not recommended. If you're interested, there's a list of what you can do here. But you can't flush a file to disk, for example (which is about the only thing I can think of that you might legitimately want to do here). That's off limits.
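To make that concrete, a minimal sketch (the handler name and exit code are illustrative): _exit() is on the async-signal-safe list, so it may be called directly from the handler; almost nothing else here could be.

#include <csignal>
#include <unistd.h>

// _exit() is async-signal-safe: the process ends immediately, the OS
// reclaims memory and file descriptors, and no exit handlers or
// destructors run.
extern "C" void on_sigint(int) {
    _exit(130); // 128 + SIGINT is the conventional exit code; pick your own
}

int main() {
    std::signal(SIGINT, on_sigint);
    for (;;) pause(); // stand-in for the real program's work
}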
I need to exit from main as quickly as possible when I receive the SIGINT signal.
How is that defined? Because there's no way to "exit as quickly as possible" when you receive a signal like that.
You can either set flag(s), post to semaphore(s), or similar to set a state that tells other threads it's time to shut down, or you can kill the entire process.
If you elect to set flag(s) or similar to tell the other threads to shut down, you set those flags and return from your signal handler and hope the threads behave and the process shuts down cleanly.
If you elect to kill threads, there's effectively no difference in killing a thread, killing the process, or calling _exit(). You might as well just keep it simple and call _exit().
That's all you can choose between when you have to make your decision in a single signal-handler call. Pick one.
A better solution is to use escalating signals. For example, when you get SIGQUIT or SIGINT, you set flag(s) or otherwise tell threads it's time to clean up and exit the process - or else. Then, say five seconds later whatever is shutting down your process sends SIGTERM and the "or else" happens. When you get SIGTERM, your signal handler simply calls _exit() - those threads had their chance and they messed it up and that's their fault. Or you can call abort() to generate a core file and maybe provide enough evidence to fix the miscreant threads that won't shut down.
And finally, five seconds later the managing process will nuke the process from orbit with SIGKILL just to be sure.
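A minimal sketch of that escalation, assuming POSIX signals; the g_shutdown flag and handler names are illustrative, and the worker threads are expected to poll the flag:

#include <csignal>
#include <unistd.h>

// Illustrative flag that worker threads poll in their loops.
static volatile std::sig_atomic_t g_shutdown = 0;

extern "C" void on_sigint(int)  { g_shutdown = 1; } // politely ask threads to wind down
extern "C" void on_sigterm(int) { _exit(1); }       // threads had their chance; hard exit

int main() {
    std::signal(SIGINT, on_sigint);   // first warning: set the flag
    std::signal(SIGTERM, on_sigterm); // escalation: just _exit()
    while (!g_shutdown)
        pause(); // a real program would run its worker threads here
    // ... join the threads, then return from main so destructors run ...
    return 0;
}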
I have a large C++ multithreaded Visual Studio framework. The main process launches a set of threads to run different routines simultaneously, and then waits to join them. However, at run time I observed an unexpected behavior: the main program terminates and closes before joining all the threads, yet no asserts, exceptions, or error messages are shown on the command line.
After several trials and debugging sessions, I was able to isolate a single atomic change between a properly working behavior of the program (correct thread joining and termination) and the undesired one. In particular, the main program unexpectedly terminates after the end of a thread callback that calls the Eigen method .row() on an Eigen matrix. The thread callback seems to execute the related instruction correctly (i.e., the output vector is successfully assigned to the selected row of the input matrix) and to finish properly but, for some reason, the main thread is not able to join it and terminates immediately. If I substitute the call to the .row() method with an explicit element-wise assignment of the vector, this behavior does not occur: the main thread joins properly and the program continues and terminates as expected.
I don't really know whether the issue I'm experiencing is somehow caused by this Eigen method, but I couldn't find any other discriminating factor to debug the problem.
Does anybody have an idea or suggestion about the reason underlying this problem? I am aware that my question is very general and the behavior could have a huge set of causes, but I cannot even guess where to focus in order to solve it.
Thanks in advance
I have a multithreaded application in C++ running under Linux (Fedora 27). One of the threads keeps reading data from a file on the local disk using low-level I/O (open, read, etc.) and supplies that data to a buffer that is rotated between other threads.
Now, I suddenly ran into a strange problem where read() would start blocking indefinitely, for no apparent reason, at an arbitrary offset into the file. I added a monitor thread that would detect this block (by setting a timestamp before entering read()) and attempt to shut down the program when it occurred.
The weird thing is that at the end of the main thread, when it waits in pthread_join on that read thread, the join returns 0 (success).
I tried again, but replaced the call to read() with a while(1); loop, and now pthread_join does not finish, as one would expect.
I then examined the program in gdb, and to my surprise, when I reach the pthread_join, the read thread is GONE!
Looking at info threads when the monitor thread detects a blocking read(), the thread is still there, but at some point it disappears, and I can't catch it!
I'm trying to catch this thread exiting, and I'm looking for ideas on how to do so. I am using pthread_cleanup_push/pop, but my cleanup function is not being invoked by the read thread (it is for all the other threads).
Any ideas? I'm at my wits' end!
edit ----------------------------------------
It appears to have something to do with syslog being called from a completely unrelated thread.
read is a cancellation point, so if your application calls pthread_cancel to terminate the thread at some point, the thread will cease to exist (after executing the cleanup actions). Joining a canceled thread succeeds and yields the special value PTHREAD_CANCELED for the void * value optionally filled out by pthread_join.
If you replace read with an endless loop, then there is no cancellation point, the cancellation request is not acted upon, and pthread_join will also wait indefinitely.
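A minimal sketch of that behavior (the thread blocks in read() on a pipe that never receives data, so it can only be cancelled at that cancellation point; all names are illustrative):

#include <pthread.h>
#include <unistd.h>
#include <cstdio>

static void *reader(void *) {
    // A pipe with no data makes read() block forever; read() is a
    // cancellation point, so pthread_cancel() takes effect here.
    int fds[2];
    if (pipe(fds) != 0)
        return nullptr;
    char buf[64];
    read(fds[0], buf, sizeof buf);
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, reader, nullptr);
    sleep(1);          // let the thread block inside read()
    pthread_cancel(t); // acted upon at the cancellation point
    void *result = nullptr;
    pthread_join(t, &result); // succeeds (returns 0) for a cancelled thread
    if (result == PTHREAD_CANCELED)
        std::puts("thread was cancelled inside read()");
    return 0;
}

Compile with -pthread; joining the cancelled thread returns 0 and reports PTHREAD_CANCELED, matching what the question observed with the vanished read thread.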
There was no direct and satisfactory answer found to quite a simple question:
Given multiple threads running, is there a generic/correct way to wait for them to finish while exiting the process? Or "is doing a timed wait OK in this case?"
Yes, we attempt to signal threads to finish, but we have observed that during process exit some of them tend to stall. We recently had a discussion, and it was decided to get rid of the "arbitrary wait":
m_thread.quit(); // the way we used to finish threads
m_thread.wait(kWaitMs); // with some significant expiration (~1000 ms)
m_thread.quit(); // the way we finish threads now
m_thread.wait(); // wait forever until finished
I understand that the kWaitMs constant should be chosen roughly proportional to one uninterrupted "job cycle" of the thread. Say, if the thread processes a chunk of data in 10 ms, then we should probably wait about 100 ms for it to respond to the quit signal, and if it still does not quit, we simply stop waiting. We don't wait any longer in that case because we are quitting the program and no longer care. But some engineers don't understand this "paradigm" and want an ultimate wait. Mind that in our case a program process left stuck in memory on the client machine will certainly cause problems on the next program start, not to mention that the log will not be properly finished for error processing.
Can the question about properly finishing threads on process quit be answered?
Is there some assistance from Qt/APIs to resolve the thread hang-up better, so we can log the reason for it?
P.S. Mind that I am well aware of why it is wrong to terminate a thread forcefully and of how that can be done. This question, I guess, is not about synchronization but about the limited determinism of threads that run tons of our, framework, and OS code. The OS is not real-time, right: Windows / macOS / Linux, etc.
P.P.S. All the threads in question have an event loop, so they should respond to QThread::quit().
Yes, we attempt to signal threads to finish but it is observed that during process exit some of them tend to stall.
That is your real problem. You need to figure out why some of your threads are stalling, and fix them so that they do not stall and always quit reliably when they are supposed to. (The exact amount of time they take to quit isn't that important, as long as they do quit in a reasonable amount of time, i.e. before the user gets tired of waiting and force-quits the whole application)
If you don't/can't do that, then there is no way to shut down your app reliably, because you can't safely free up any resources that a thread might still be accessing. It is necessary to 100% guarantee that a thread has exited before the main thread calls the destructors of any objects that the thread uses (e.g. the QThread object associated with the thread)
So to sum up: don't bother playing games with wait-timeouts or forcibly-terminating threads; all that will get you is an application that sometimes crashes on shutdown. Use an indefinite-wait, and make sure your threads always (always!) quit after the main thread has asked them to, as that is the only way you'll achieve a reliable shutdown sequence.
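A minimal sketch of that indefinite-wait shutdown, assuming (as in the question) worker QThreads that run an event loop; the function and variable names here are illustrative:

#include <QCoreApplication>
#include <QThread>

// Ask the worker to stop, then wait as long as it takes. This is only
// reliable if the thread is guaranteed to quit when asked.
static void shutDownWorker(QThread &worker) {
    worker.requestInterruption(); // long jobs should poll isInterruptionRequested()
    worker.quit();                // tell the thread's event loop to exit
    worker.wait();                // indefinite wait, no timeout games
}

int main(int argc, char **argv) {
    QCoreApplication app(argc, argv);
    QThread worker;
    worker.start(); // the default run() just spins an event loop
    shutDownWorker(worker);
    return 0;
}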
How can signals be handled safely in an MPI application (for example SIGUSR1, which should tell the application that its runtime has expired and that it should terminate within the next 10 min)?
I have several constraints:
Finish all parallel/serial IO first before quitting the application!
In all other circumstances the application can exit without any problem.
How can this be achieved safely, with no deadlocks while trying to exit, properly leaving the current context, jumping back to main(), and calling MPI_FINALIZE()?
Somehow the processes have to agree on exiting (I think this is the same in multithreaded applications), but how is that done efficiently without too much communication? Is anybody aware of some standard way of doing this properly?
Below are some thoughts which might or might not work:
Idea 1:
Let's say for each process we catch the signal in a signal handler, push it onto an "unhandled signals stack" (USS), and simply return from the signal handler routine. We then have certain termination points in our application, especially before and after IO operations, which handle all signals in the USS.
If there is a SIGUSR1 in the USS, for example, each process would then exit at its termination point.
This idea has the problem that there could still be deadlocks: process 1 catches a signal just before a termination point, while process 2 has already passed this point and is now starting parallel IO. Process 1 would exit, which results in a deadlock in process 2 (waiting on the exited process 1 for IO)...
Idea 2:
Only the master process 0 catches the signal in the signal handler and then sends a broadcast message, "all processes exit!", at a specific point in the application. All processes receive the broadcast and throw an exception which is caught in main, and MPI_FINALIZE is called.
This way the exit happens safely, but at the cost of having to continuously check for the broadcast message to see whether we should exit or not.
Thanks a lot!
If your goal is to stop all processes at the same point, then there is no way around always synchronizing at the possible termination points. That is, a collective call at the termination points is required.
Of course, you can try to avoid an extra broadcast by using the synchronization of another collective call to ensure proper termination, or piggyback the termination information on an existing broadcast, but I don't think that's worth it. After all, you only need to synchronize before I/O and at least once per ten minutes. At such a frequency, even a broadcast is not a performance problem.
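A minimal sketch of such a synchronized termination point, close to the question's ideas (the flag, handler, and function names are illustrative): the handler only sets a flag, and at every termination point the ranks agree collectively on whether anyone caught the signal, so no rank can exit while another starts collective I/O.

#include <mpi.h>
#include <csignal>

// Set by the handler; inspected only at termination points.
static volatile std::sig_atomic_t g_signal_seen = 0;

extern "C" void on_sigusr1(int) { g_signal_seen = 1; }

// Collective check: returns true on every rank iff at least one rank
// has caught the signal, so all ranks decide identically.
static bool should_terminate(MPI_Comm comm) {
    int mine = g_signal_seen ? 1 : 0;
    int any = 0;
    MPI_Allreduce(&mine, &any, 1, MPI_INT, MPI_LOR, comm);
    return any != 0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    std::signal(SIGUSR1, on_sigusr1);

    for (;;) {
        // ... one chunk of computation ...
        if (should_terminate(MPI_COMM_WORLD))
            break; // all ranks leave together, so the I/O below stays collective
        // ... parallel I/O, safe because no rank has exited early ...
    }

    MPI_Finalize();
    return 0;
}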
Using signals in your MPI application in general is not safe. Some implementations may support it and others may not.
For instance, in MPICH, SIGUSR1 is used by the process manager for internal notification of abnormal failures.
http://lists.mpich.org/pipermail/discuss/2014-October/003242.html
Open MPI, on the other hand, will forward SIGUSR1 and SIGUSR2 from mpiexec to the other processes.
http://www.open-mpi.org/doc/v1.6/man1/mpirun.1.php#sect14
Other implementations will differ. So before you go too far down this route, make sure that the implementation you're using can deal with it.