The library PETSc runs some test programs during configuration while checking the environment. One of those test programs is the following (reduced by removing two project-relative headers):
#include <stdlib.h>
#include <mpi.h>

int main() {
    int size;
    int ierr;
    MPI_Init(0, 0);
    ierr = MPI_Type_size(MPI_LONG_DOUBLE, &size);
    if (ierr || (size == 0)) exit(1);
    MPI_Finalize();
    return 0;
}
Configuration fails due to a timeout. When debugging the program, it gets stuck at the line MPI_Init(0, 0);, even though this call should be perfectly legal. I am using Open MPI 2 with g++ 9.2.1, running on openSUSE Tumbleweed.
The program is compiled using
mpicxx -O0 -g mpi_test.cpp -o mpi_test
I am trying to run the following example MPI code that launches 20 threads and keeps those threads busy for a while. However, when I check the CPU utilization using a tool like nmon or top I see that only a single thread is being used.
#include <iostream>
#include <thread>
#include <mpi.h>

using namespace std;

int main(int argc, char *argv[]) {
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided != MPI_THREAD_FUNNELED)
        exit(1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    auto f = [](float x) {
        float result = 0;
        for (float i = 0; i < x; i++) { result += 10 * i + x; }
        cout << "Result: " << result << endl;
    };

    thread threads[20];
    for (int i = 0; i < 20; ++i)
        threads[i] = thread(f, 100000000.f); // do some work
    for (auto& th : threads)
        th.join();

    MPI_Finalize();
    return 0;
}
I compile this code using mpicxx: mpicxx -std=c++11 -pthread example.cpp -o example and run it using mpirun: mpirun -np 1 example.
I am using Open MPI version 4.1.4, which is compiled with POSIX thread support (following the explanation from this question).
$ mpicxx --version
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
$ mpirun --version
mpirun (Open MPI) 4.1.4
$ ompi_info | grep -i thread
Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, OMPI progress: no, ORTE progress: yes, Event lib: yes)
FT Checkpoint support: no (checkpoint thread: no)
$ mpicxx -std=c++11 -pthread example.cpp -o example
$ ./example
My CPU has 10 cores and 20 threads and runs the example code above without MPI on all 20 threads. So, why does the code with MPI not run on all threads?
I suspect I might need to do something with MPI bindings, which I see being mentioned in some answers on the same topic (1, 2), but other answers entirely exclude these options, so I'm unsure whether this is the correct approach.
mpirun -np 1 ./example assigns a single core to your program (so the 20 threads end up time-sharing): this is the default behavior for Open MPI (i.e. 1 core per MPI process when running with -np 1 or -np 2).
./example (i.e. singleton mode) should use all the available cores, unless you are already running on a subset.
If you want to use all the available cores with mpirun, you can
mpirun --bind-to none -np 1 ./example
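If you want to double-check what the process was actually bound to, Open MPI's mpirun has a --report-bindings option, and on Linux you can also query the affinity mask from inside the program. The following is a minimal sketch of the latter (my own illustration, not part of the original answer); with the default binding it should report 1, and with --bind-to none it should report all logical CPUs:
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    // pid 0 means "the calling process"
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        std::perror("sched_getaffinity");
        return 1;
    }
    // CPU_COUNT gives the number of logical CPUs the process may run on
    std::printf("process may run on %d logical CPU(s)\n", CPU_COUNT(&mask));
    return 0;
}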
Let's consider a program as follows:
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[]){
int num_proc;
#ifdef MPI_VERSION
MPI_init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
MPI_Finalize();
#else
num_proc = 1;
#endif
printf("%d\n", num_proc);
}
I want it to build as both an MPI and a non-MPI version.
That means that when it is compiled and run without the MPI wrapper, as below, num_proc is set to 1.
g++ main.cpp && ./a.out
Whereas, if it is compiled and run with the MPI wrapper, as below, num_proc is set to 2.
mpicxx main.cpp && mpiexec -n 2 ./a.out
Is this possible? How?
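One possible direction (my own sketch, not a confirmed answer, and assuming a compiler with C++17's __has_include) is to include mpi.h only when the header is actually available, so the same source still compiles with plain g++ on a machine without MPI installed, while the MPI_VERSION check keeps working when mpicxx is used:
#include <stdio.h>

#if defined(__has_include)
#  if __has_include(<mpi.h>)
#    include <mpi.h>
#  endif
#endif

int main(int argc, char *argv[]) {
    int num_proc = 1;              /* default for the non-MPI build */
#ifdef MPI_VERSION                 /* defined by mpi.h */
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
    MPI_Finalize();
#endif
    printf("%d\n", num_proc);
    return 0;
}
Note that this only distinguishes "header present" from "header absent"; if mpi.h happens to be installed system-wide, you would still need something like a -DUSE_MPI flag from the build system to select the non-MPI build.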
I have a small function in written Haskell with the following type:
foreign export ccall sget :: Ptr CInt -> CSize -> Ptr CSize -> IO (Ptr CInt)
I am calling this from multiple C++ threads running concurrently (via TBB). During this part of the execution of my program I can barely get a load average above 1.4, even though I'm running on a six-core CPU (12 logical cores). I therefore suspect that either the calls into Haskell all get funnelled through a single thread, or there is some significant synchronization going on.
I am not doing any such thing explicitly; all the function does is operate on the incoming data (after storing it into a Data.Vector.Storable) and return the result back as a newly allocated array (from Data.Marshal.Array).
Is there anything I need to do to fully enable concurrent calls like this?
I am using GHC 8.6.5 on Debian Linux (bullseye/testing), and I am compiling with -threaded -O2.
Looking forward to reading some advice,
Sebastian
Using the simple example at the end of this answer, if I compile with:
$ ghc -O2 Worker.hs
$ ghc -O2 -threaded Worker.o caller.c -lpthread -no-hs-main -o test
then running it with ./test occupies only one core at 100%. I need to run it with ./test +RTS -N, and then on my 4-core desktop, it runs at 400% with a load average of around 4.0.
So, the RTS -N flag affects the number of parallel threads that can simultaneously run an exported Haskell function, and there is no special action required (other than compiling with -threaded and running with +RTS -N) to fully utilize all available cores.
So, there must be something about your example that's causing the problem. It could be contention between threads over some shared data structure. Or, maybe parallel garbage collection is causing problems; I've observed parallel GC causing worse performance with increasing -N in a simple test case (details forgotten, sadly), so you could try turning off parallel GC with -qg or limiting the number of cores involved with -qn2 or something. To enable these options, you need to call hs_init_with_rtsopts() in place of the usual hs_init() as in my example.
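For completeness, here is a small sketch (mine, not part of the example below) of how such RTS flags can be baked into the initialization itself via hs_init_with_rtsopts(), so the caller does not have to remember the +RTS ... -RTS arguments; the particular flag values are only placeholders:
#include "HsFFI.h"
#include "Rts.h"

static void init_haskell_runtime(void)
{
    /* hs_init_with_rtsopts() parses the "+RTS ... -RTS" section of argv,
       so we can synthesize one containing the flags we want. */
    static char arg0[] = "prog";
    static char arg1[] = "+RTS";
    static char arg2[] = "-N";    /* use all cores */
    static char arg3[] = "-qg";   /* disable parallel GC, as suggested above */
    static char arg4[] = "-RTS";
    static char *argv[] = { arg0, arg1, arg2, arg3, arg4, NULL };
    int argc = 5;
    char **argv_p = argv;
    hs_init_with_rtsopts(&argc, &argv_p);
}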
If that doesn't work, I think you'll have to try to narrow down the problem and post a minimal example that illustrates the performance issue to get more help.
My example:
caller.c
#include "HsFFI.h"
#include "Rts.h"
#include "Worker_stub.h"
#include <pthread.h>
#define NUM_THREAD 4
void*
work(void* arg)
{
for (;;) {
fibIO(30);
}
}
int
main(int argc, char **argv)
{
hs_init_with_rtsopts(&argc, &argv);
pthread_t threads[NUM_THREAD];
for (int i = 0; i < NUM_THREAD; ++i) {
int rc = pthread_create(&threads[i], NULL, work, NULL);
}
for (int i = 0; i < NUM_THREAD; ++i) {
pthread_join(threads[i], NULL);
}
hs_exit();
return 0;
}
Worker.hs
module Worker where
import Foreign
fibIO :: Int -> IO Int
fibIO = return . fib
fib :: Int -> Int
fib n | n > 1 = fib (n-1) + fib (n-2)
| otherwise = 1
foreign export ccall fibIO :: Int -> IO Int
I prepared a C++ interface to a legacy Fortran library.
Some subroutines in the legacy library follow an ugly but usable status code convention to report errors, and I use such status codes to throw a readable exception from my C++ code: it works great.
On the other hand, sometimes the legacy library calls STOP (which terminates the program). And it often does so even though the condition is recoverable.
I would like to capture this STOP from within C++, and so far I have been unsuccessful.
The following code is simple, but exactly represents the problem at hand:
The Fortran legacy library fmodule.f90:
module fmodule
  use iso_c_binding
contains
  subroutine fsub(x) bind(c, name="fsub")
    real(c_double) x
    if(x>=5) then
      stop 'x >=5 : this kills the program'
    else
      print*, x
    end if
  end subroutine fsub
end module fmodule
The C++ Interface main.cpp:
#include <iostream>

// prototype for the external Fortran subroutine
extern "C" {
  void fsub(double& x);
}

int main() {
  double x;
  while (std::cin >> x) {
    fsub(x);
  }
  return 0;
}
The compilation lines (GCC 4.8.1 / OS X 10.7.4; $ denotes the command prompt):
$ gfortran -o libfmodule.so fmodule.f90 -shared -fPIC -Wall
$ g++ main.cpp -L. -lfmodule -std=c++11
The run:
$ ./a.out
1
1.0000000000000000
2
2.0000000000000000
3
3.0000000000000000
4
4.0000000000000000
5
STOP x >=5 : this kills the program
How could I capture the STOP and, say, request another number? Notice that I do not want to touch the Fortran code.
What I have tried:
std::atexit: cannot "come back" from it once I have entered it
std::signal: STOP does not seem to throw a signal which I can capture
You can solve your problem by intercepting the call to the exit function from the Fortran runtime. See below. a.out is created with your code and the compilation lines you give.
Step 1. Figure out which function is called. Fire up gdb
$ gdb ./a.out
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
[...]
(gdb) break fsub
Breakpoint 1 at 0x400888
(gdb) run
Starting program: a.out
5
Breakpoint 1, 0x00007ffff7dfc7e4 in fsub () from ./libfmodule.so
(gdb) step
Single stepping until exit from function fsub,
which has no line number information.
stop_string (string=0x7ffff7dfc8d8 "x >=5 : this kills the programfmodule.f90", len=30) at /usr/local/src/gcc-4.7.2/libgfortran/runtime/stop.c:67
So stop_string is called. We need to know to which symbol this function corresponds.
Step 2. Find the exact name of the stop_string function. It must be in one of the shared libraries.
$ ldd ./a.out
linux-vdso.so.1 => (0x00007fff54095000)
libfmodule.so => ./libfmodule.so (0x00007fa31ab7d000)
libstdc++.so.6 => /usr/local/gcc/4.7.2/lib64/libstdc++.so.6 (0x00007fa31a875000)
libm.so.6 => /lib64/libm.so.6 (0x0000003da4000000)
libgcc_s.so.1 => /usr/local/gcc/4.7.2/lib64/libgcc_s.so.1 (0x00007fa31a643000)
libc.so.6 => /lib64/libc.so.6 (0x0000003da3c00000)
libgfortran.so.3 => /usr/local/gcc/4.7.2/lib64/libgfortran.so.3 (0x00007fa31a32f000)
libquadmath.so.0 => /usr/local/gcc/4.7.2/lib64/libquadmath.so.0 (0x00007fa31a0fa000)
/lib64/ld-linux-x86-64.so.2 (0x0000003da3800000)
I found it in (no surprise) the Fortran runtime.
$ readelf -s /usr/local/gcc/4.7.2/lib64/libgfortran.so.3|grep stop_string
1121: 000000000001b320 63 FUNC GLOBAL DEFAULT 11 _gfortran_stop_string@@GFORTRAN_1.0
2417: 000000000001b320 63 FUNC GLOBAL DEFAULT 11 _gfortran_stop_string
Step 3. Write a function that will replace that function
I look for the precise signature of the function in the source code (/usr/local/src/gcc-4.7.2/libgfortran/runtime/stop.c; see the gdb session).
$ cat my_exit.c
#define _GNU_SOURCE
#include <stdio.h>

void _gfortran_stop_string (const char *string, int len)
{
    printf("Let's keep on");
}
Step 4. Compile a shared object exporting that symbol.
gcc -Wall -fPIC -c -o my_exit.o my_exit.c
gcc -shared -fPIC -Wl,-soname -Wl,libmy_exit.so -o libmy_exit.so my_exit.o
Step 5. Run the program with LD_PRELOAD so that our new function takes precedence over the one from the runtime
$ LD_PRELOAD=./libmy_exit.so ./a.out
1
1.0000000000000000
2
2.0000000000000000
3
3.0000000000000000
4
4.0000000000000000
5
Let's keep on 5.0000000000000000
6
Let's keep on 6.0000000000000000
7
Let's keep on 7.0000000000000000
There you go.
Since what you want would result in non-portable code anyway, why not just subvert the exit mechanism using the somewhat obscure longjmp mechanism:
#include <iostream>
#include <csetjmp>
#include <cstdlib>

// prototype for the external Fortran subroutine
extern "C" {
  void fsub(double* x);
}

volatile bool please_dont_exit = false;
std::jmp_buf jenv;

static void my_exit_handler() {
  if (please_dont_exit) {
    std::cout << "But not yet!\n";
    // Re-register ourself
    std::atexit(my_exit_handler);
    longjmp(jenv, 1);
  }
}

void wrapped_fsub(double& x) {
  please_dont_exit = true;
  if (!setjmp(jenv)) {
    fsub(&x);
  }
  please_dont_exit = false;
}

int main() {
  std::atexit(my_exit_handler);

  double x;
  while (std::cin >> x) {
    wrapped_fsub(x);
  }
  return 0;
}
Calling longjmp jumps right back into the line with the setjmp call, and setjmp then returns the value passed as the second argument of longjmp; otherwise setjmp returns 0. Sample output (OS X 10.7.4, GCC 4.7.1):
$ ./a.out
2
2.0000000000000000
6
STOP x >=5 : this kills the program
But not yet!
7
STOP x >=5 : this kills the program
But not yet!
4
4.0000000000000000
^D
$
No library preloading required (which anyway is a bit more involved on OS X than on Linux). A word of warning though: exit handlers are called in reverse order of their registration. One should be careful that no other exit handlers are registered after my_exit_handler.
Combining the two answers that use a custom _gfortran_stop_string function and longjmp, I thought that raising an exception inside the custom function would be similar, and then catching it in the main code. So this came out:
main.cpp:
#include <iostream>

// prototype for the external Fortran subroutine
extern "C" {
  void fsub(double& x);
}

int main() {
  double x;
  while (std::cin >> x) {
    try { fsub(x); }
    catch (int rc) { std::cout << "Fortran stopped with rc = " << rc << std::endl; }
  }
  return 0;
}
catch.cpp:
extern "C" {
void _gfortran_stop_string (const char*, int);
}
void _gfortran_stop_string (const char *string, int len)
{
throw 666;
}
Then, compiling:
gfortran -c fmodule.f90
g++ -c catch.cpp
g++ main.cpp fmodule.o catch.o -lgfortran
Running:
./a.out
2
2.0000000000000000
3
3.0000000000000000
5
Fortran stopped with rc = 666
6
Fortran stopped with rc = 666
2
2.0000000000000000
3
3.0000000000000000
^D
So, seems to work :)
I suggest you fork your process before calling the Fortran code and exit 0 after the Fortran execution (edit: if STOP exits with zero, you will need a sentinel exit code; clunky, but it does the job). That way every Fortran call will finish in the same way, the same as if it had stopped. Or, if "STOP" indicates an error, throw the exception when the Fortran code stops and send some other message when the Fortran execution "completes" normally.
Below is an example inspired by your code, assuming a Fortran "STOP" is an error.
#include <iostream>
#include <cstdlib>
#include <unistd.h>
#include <sys/wait.h>

// prototype for the external Fortran subroutine
extern "C" {
  void fsub(double& x);
}

int main() {
  double x;
  pid_t pid;
  int status;
  int exit_code_normal = 77; // sentinel: some value that is different from all STOP exit code values
  while (std::cin >> x) {
    pid = fork();
    if (pid < 0) {
      // error with the fork: handle appropriately
    } else if (pid == 0) {
      fsub(x);                   // child runs the Fortran call
      exit(exit_code_normal);    // reached only if STOP was not hit
    } else {
      wait(&status);
      if (!WIFEXITED(status) || WEXITSTATUS(status) != exit_code_normal) {
        // throw your error message.
      }
    }
  }
  return 0;
}
The exit code could be a constant instead of a variable. I don't think it matters much.
Following a comment: the result from the execution would be lost if it sits in the memory of the process (rather than, say, being written to a file). If that is the case, I can think of three possibilities:
The Fortran code messes with a whole lot of memory during the call, and letting the execution continue beyond the STOP is probably not a good idea in the first place.
The Fortran code simply returns some value (through its argument, if my Fortran is not too rusty), and this could be relayed back to the parent easily through a shared memory space.
The execution of the Fortran subroutine acts on an external system (e.g. writes to a file) and no return values are expected.
In the 3rd case, my solution above works as is. I prefer it over the other suggested solutions mainly because: 1) you don't have to ensure the build process is properly maintained, 2) the Fortran "STOP" still behaves as expected, and 3) it requires very few lines of code and all the "Fortran STOP workaround" logic sits in one single place. So in terms of long-term maintenance, I much prefer that.
In the 2nd case, my code above needs a small modification (see the sketch below) but still holds the advantages enumerated above at the price of minimal added complexity.
In the 1st case, you will have to mess with the Fortran code no matter what.
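To illustrate the 2nd case, here is a rough Linux-flavored sketch (my own illustration, not the answer's code) of relaying a result back through an anonymous shared mapping, using the fsub interface from the question; the sentinel value is the same arbitrary one as in the code above:
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// prototype for the external Fortran subroutine (as in the question)
extern "C" void fsub(double& x);

int main() {
    // one double visible to both parent and child
    void *p = mmap(nullptr, sizeof(double), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    double *shared = static_cast<double*>(p);

    const int exit_code_normal = 77;   // sentinel, as above
    double x = 3.0;

    pid_t pid = fork();
    if (pid == 0) {                    // child: call into Fortran
        fsub(x);
        *shared = x;                   // publish the (possibly modified) argument
        std::exit(exit_code_normal);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status) && WEXITSTATUS(status) == exit_code_normal)
        std::printf("result from child: %f\n", *shared);
    else
        std::printf("Fortran STOPped, no result\n");

    munmap(shared, sizeof(double));
    return 0;
}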
I have set up a chrooted Debian Etch (32-bit) under Ubuntu 12.04 (64-bit), and it appears that clock_gettime() works with CLOCK_MONOTONIC, but fails with both CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID. errno is set to EINVAL, which according to the man page means that "The clk_id specified is not supported on this system."
All three clocks work fine outside the chrooted Debian and in a 64-bit chrooted Debian Etch.
Can someone explain to me why this is the case and how to fix it?
Much appreciated.
I don't know the cause yet, but I have ideas that won't fit in the comment box.
First, you can make the test program simpler by compiling it as C instead of C++ and not linking it to libpthread. -lrt should be good enough to get clock_gettime. Also, compiling it with -static could make tracing easier since the dynamic linker startup stuff won't be there.
Static linking might even change the behavior of clock_gettime. It's worth trying just to find out whether it works around the bug.
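For reference, a stripped-down C-style test of the affected clock might look something like this (my sketch; the asker's original test program is not shown here), built with something like gcc test.c -lrt:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    /* the clock that reportedly fails with EINVAL in the 32-bit chroot */
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts)) {
        perror("clock_gettime");
        return 1;
    }
    printf("CLOCK_PROCESS_CPUTIME_ID: %lu.%09ld\n",
           (unsigned long)ts.tv_sec, ts.tv_nsec);
    return 0;
}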
Another thing I'd like to see is the output of this vdso-bypassing test program:
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    struct timespec ts;
    if(syscall(SYS_clock_gettime, CLOCK_PROCESS_CPUTIME_ID, &ts)) {
        perror("clock_gettime");
        return 1;
    }
    printf("CLOCK_PROCESS_CPUTIME_ID: %lu.%09ld\n",
           (unsigned long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
with and without -static, and if it fails, add strace.
Update (actually, skip this; go to the second update)
A couple more simple test ideas:
compile and run a 32-bit test program in the Ubuntu host system, by adding -m32 to the gcc command. It's possible that the kernel's 32-bit compatibility mode is causing the error. If that's the case, then the 32-bit version will fail no matter which libc it gets linked to.
take the non-static test programs you compiled under Debian, copy them to the Ubuntu host system, and try to run them there. A change in behavior will point to libc as the cause.
Then it's time for the hard stuff: looking at disassembled code and maybe single-stepping it in gdb. Instead of having you do that on your own, I'd like to get a copy of the code you're running. Upload a static-compiled failing test program somewhere I can get it. Also a copy of the 32-bit vdso provided by your kernel might be interesting. To extract the vdso, run the following program (compiled in the 32-bit chroot), which will create a file called vdso.dump, and upload that too.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int getvseg(const char *which, const char *outfn)
{
    FILE *maps, *outfile;
    char buf[1024];
    void *start, *end;
    size_t sz;
    void *copy;
    int ret;
    char search[strlen(which)+4];

    maps = fopen("/proc/self/maps", "r");
    if(!maps) {
        perror("/proc/self/maps");
        return 1;
    }
    outfile = fopen(outfn, "w");
    if(!outfile) {
        perror(outfn);
        fclose(maps);
        return 1;
    }
    sprintf(search, "[%s]\n", which);
    while(fgets(buf, sizeof buf, maps)) {
        if(strlen(buf)<strlen(search) ||
           strcmp(buf+strlen(buf)-strlen(search),search))
            continue;
        if(sscanf(buf, "%p-%p", &start, &end)!=2) {
            fprintf(stderr, "weird line in /proc/self/maps: %s", buf);
            continue;
        }
        sz = (char *)end - (char *)start;
        /* copy because I got an EFAULT trying to write directly from vsyscall */
        copy = malloc(sz);
        if(!copy) {
            perror("malloc");
            goto fail;
        }
        memcpy(copy, start, sz);
        if(fwrite(copy, 1, sz, outfile)!=sz) {
            if(ferror(outfile))
                perror(outfn);
            else
                fprintf(stderr, "%s: short write", outfn);
            free(copy);
            goto fail;
        }
        free(copy);
        goto success;
    }
    fprintf(stderr, "%s not found\n", which);
fail:
    ret = 1;
    goto out;
success:
    ret = 0;
out:
    fclose(maps);
    fclose(outfile);
    return ret;
}

int main(void)
{
    int ret = 1;
    if(!getvseg("vdso", "vdso.dump")) {
        printf("vdso dumped to vdso.dump\n");
        ret = 0;
    }
    if(!getvseg("vsyscall", "vsyscall.dump")) {
        printf("vsyscall dumped to vsyscall.dump\n");
        ret = 0;
    }
    return ret;
}
Update 2
I reproduced this by downloading an Etch libc. It's definitely caused by glibc stupidity. Instead of a simple syscall wrapper for clock_gettime it has a big wad of preprocessor spaghetti culminating in "you can't use clockids that we didn't pre-approve". You're not going to get it to work with that old glibc. Which brings us to the question I didn't want to ask: why are you trying to use an obsolete version of Debian anyway?
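If you are stuck with that glibc, a practical workaround (a sketch on my part, along the lines of the syscall-based test program above) is to bypass the wrapper and always issue the raw syscall yourself:
#define _GNU_SOURCE
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

/* drop-in replacement that skips the old glibc clock_gettime wrapper;
   usage: raw_clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); */
static int raw_clock_gettime(clockid_t clk_id, struct timespec *tp)
{
    return syscall(SYS_clock_gettime, clk_id, tp);
}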