In-place mpi_reduce crashes with Open MPI - Fortran

Whenever I try to call mpi_reduce with mpi_in_place as the send buffer it crashes. A trawl of Google reveals this to have been a problem on Mac OS with OMPI 1.3.3, but I'm on CentOS with OMPI 1.6.3 (and gfortran 4.4.6).
The following program crashes:
PROGRAM reduce
  USE mpi
  IMPLICIT NONE
  REAL, DIMENSION(2, 3) :: buffer, gbuffer
  INTEGER :: ierr, me_world
  INTEGER :: buf_shape(2), counts
  CALL mpi_init(ierr)
  CALL mpi_comm_rank(mpi_comm_world, me_world, ierr)
  buffer = 1.
  IF (me_world .EQ. 0) PRINT*, "buffer: ", buffer
  buf_shape = SHAPE(buffer)
  counts = buf_shape(1)*buf_shape(2)
  CALL mpi_reduce(MPI_IN_PLACE, buffer, counts, mpi_real, mpi_sum, 0, mpi_comm_world, ierr)
  IF (me_world .EQ. 0) PRINT*, "buffer: ", buffer
  CALL mpi_finalize(ierr)
END PROGRAM reduce
The MPI error is:
MPI_ERR_ARG: invalid argument of some other kind
which is not very helpful.
Am I missing something as to how mpi_reduce should be called? Does this work with other compilers/MPI implementations?

You are missing a very important part of how the in-place reduction operation works in MPI (note in particular the last sentence of the quote below):
When the communicator is an intracommunicator, you can perform a reduce operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process sendbuf. In this case, the input data is taken at the root from the receive buffer, where it will be replaced by the output data.
The other processes still have to supply their local buffers as sendbuf, not MPI_IN_PLACE:
IF (me_world == 0) THEN
  CALL mpi_reduce(MPI_IN_PLACE, buffer, counts, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
ELSE
  CALL mpi_reduce(buffer, buffer, counts, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
END IF
You can safely pass buffer as both sendbuf and recvbuf in non-root processes since MPI_REDUCE does not write to recvbuf in those processes.
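For reference, the same pattern in C looks like this (a minimal sketch, not the original poster's program; on non-root ranks the receive buffer is not significant, so it may be left as NULL):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    float buffer[6];
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 6; i++)
        buffer[i] = 1.0f;

    if (rank == 0)
        /* Root: the result overwrites buffer in place. */
        MPI_Reduce(MPI_IN_PLACE, buffer, 6, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    else
        /* Non-root: buffer is the send buffer; the recvbuf argument is ignored here. */
        MPI_Reduce(buffer, NULL, 6, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("buffer[0] = %f\n", buffer[0]);

    MPI_Finalize();
    return 0;
}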

Related

Allocation of pointers in MPI collective communications

I wonder how MPI collective communications such as Bcast, Scatter, Gather, etc. behave when the send buffer is allocated on the root but not allocated on the other ranks.
For example:
rowptr = (int*)malloc(sizeof(int) * (row_count + 1));
MPI_Scatterv(all_rows, rowCounts, rowDispls, MPI_INT,
rowptr, row_count, MPI_INT, MASTER, MPI_COMM_WORLD);
where all_rows is only allocated in the MASTER (rank == 0) process. What is the behavior of MPI in this situation?
Or in the following case:
MPI_Scatter(eCounts, 1, MPI_INT, &elm_count, 1, MPI_INT, MASTER, MPI_COMM_WORLD);
where eCounts is an int[] and elm_count is an int, but eCounts is allocated only in MASTER.
Should I also allocate send buffers even if they are not used in other ranks?
From the MPI 3.1 standard (chapter 5.6, page 160):
The send buffer is ignored for all non-root processes.
[...]
All arguments to the function are significant on process root, while on other processes, only arguments recvbuf, recvcount, recvtype, root, and comm are significant.
Same story for MPI_Gather() but replace recv* with send*.
All arguments are significant in the case of MPI_Bcast() (the buffer is a send buffer on the root rank, and a receive buffer on the other ranks).
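In other words, the non-root ranks do not need to allocate the send-side arrays at all and can simply pass NULL for them. A minimal MPI_Scatterv sketch along the lines of the question (the sizes and initial values here are made up for illustration):
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define MASTER 0

int main(int argc, char *argv[])
{
    int rank, size, i;
    int *all_rows = NULL, *rowCounts = NULL, *rowDispls = NULL;
    int row_count = 2;                       /* every rank receives 2 ints in this sketch */
    int *rowptr = malloc(sizeof(int) * row_count);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == MASTER) {
        /* The send-side arguments only need to be valid on the root. */
        all_rows  = malloc(sizeof(int) * row_count * size);
        rowCounts = malloc(sizeof(int) * size);
        rowDispls = malloc(sizeof(int) * size);
        for (i = 0; i < row_count * size; i++) all_rows[i] = i;
        for (i = 0; i < size; i++) {
            rowCounts[i] = row_count;
            rowDispls[i] = i * row_count;
        }
    }

    /* On non-root ranks all_rows, rowCounts and rowDispls stay NULL; MPI ignores them there. */
    MPI_Scatterv(all_rows, rowCounts, rowDispls, MPI_INT,
                 rowptr, row_count, MPI_INT, MASTER, MPI_COMM_WORLD);

    printf("rank %d received %d %d\n", rank, rowptr[0], rowptr[1]);

    MPI_Finalize();
    return 0;
}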

MPI Programming in C - MPI_Send() and MPI_Recv() Address Trouble

I'm currently working on a C program using MPI, and I've run into a roadblock with the MPI_Send() and MPI_Recv() functions that I hope you all can help me out with. My goal is to send (with MPI_Send()) and receive (with MPI_Recv()) the address of "a[0][0]" (defined below), and then display the CONTENTS of that address after I've received it, in order to confirm my send and receive are working. I've outlined my problem below:
I have a 2-d array, "a", that works like this:
a[0][0] Contains my target ADDRESS
*a[0][0] Contains my target VALUE
i.e. printf("a[0][0] Value = %3.2f, a[0][0] Address = %p\n", *a[0][0], a[0][0]);
So, I run my program and memory is allocated for a. Debug confirms that a[0][0] contains the address 0x83d6260, and that the value stored at address 0x83d6260 is 0.58. In other words, "a[0][0] = 0x83d6260" and "*a[0][0] = 0.58".
So, I pass the address, "a[0][0]", as the first parameter of MPI_Send():
-> MPI_Send(a[0][0], 1, MPI_FLOAT, i, 0, MPI_COMM_WORLD);
// I put 1 as the second parameter because I only want to receive this one address
MPI_Send() executes and returns 0, which is MPI_SUCCESS, which means that it succeeded, and my Debug confirms that "0x83d6260" is the address passed.
However, when I attempt to receive the address by using MPI_Recv(), I get Segmentation fault:
MPI_Recv(a[0][0], 1, MPI_FLOAT, iNumProcs-1, 0, MPI_COMM_WORLD, &status);
The address 0x83d6260 was sent successfully using MPI_Send(), but I can't receive the same address with MPI_Recv(). My question is: why does MPI_Recv() cause a segmentation fault? I want to simply print the value contained in a[0][0] immediately after the MPI_Recv() call, but the program crashes.
MPI_Send(a[0][0], 1, MPI_FLOAT, ...) will send sizeof(float) bytes of memory starting at the address a[0][0].
So basically the value sent is *(float *)a[0][0].
Therefore, if a[0][0] is 0x83d6260 and *a[0][0] is 0.58f, then MPI_Recv(&buff, 1, MPI_FLOAT, ...) will set buff (a float, which needs to be allocated) to 0.58.
One important thing is that different MPI processes should NEVER share pointers (even if they run on the same node). They do not share the same virtual address space, and even if you were able to obtain an address from one rank, the other ranks would get a segfault if they tried to access the same address in their own context.
EDIT
This code works for me:
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    switch (rank)
    {
        case 0:
        {
            float*** a;
            a       = malloc(sizeof(float**));
            a[0]    = malloc(sizeof(float*));
            a[0][0] = malloc(sizeof(float));
            *a[0][0] = 0.58;
            MPI_Send(a[0][0], 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
            printf("rank 0 send done\n");
            free(a[0][0]);
            free(a[0]);
            free(a);
            break;
        }
        case 1:
        {
            float buffer;
            MPI_Recv(&buffer, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 recv done : %f\n", buffer);
            break;
        }
    }

    MPI_Finalize();
    return 0;
}
The results are:
mpicc mpi.c && mpirun -n 2 ./a.out
> rank 0 send done
> rank 1 recv done : 0.580000
I think the problem is that you're trying to put the value into the array of pointers (which is probably causing the segfault). Try making a new buffer to receive the value:
MPI_Send(a[0][0], 1, MPI_FLOAT, i, 0, MPI_COMM_WORLD);
....
float buff;
MPI_Recv(&buff, 1, MPI_FLOAT, iNumProcs-1, 0, MPI_COMM_WORLD, &status);
If I remember correctly, MPI_Send/Recv will dereference the pointer, giving you the value, not the address.
You also haven't given us enough information to tell if your source/destination values are correct.

mpi hello world not working

I wrote a simple hello-world program in Visual C++ 2010 Express with the MPI library and can't understand why my code is not working.
MPI_Init( NULL, NULL );
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
int a, b = 5;
MPI_Status st;
MPI_Send( &b, 1, MPI_INT, 0,0, MPI_COMM_WORLD );
MPI_Recv( &a, 1, MPI_INT, 0,0, MPI_COMM_WORLD, &st );
MPI_Send tells me "DEADLOCK: attempting to send a message to the local process without a prior matching receive". If I put the Recv first, the program gets stuck there (no data, blocking receive).
What am I doing wrong?
I'm using Visual C++ 2010 Express and MPI from HPC SDK 2008 (32-bit).
You need something like this:
assert(size >= 2);
if (rank == 0)
MPI_Send( &b, 1, MPI_INT, 1,0, MPI_COMM_WORLD );
if (rank == 1)
MPI_Recv( &a, 1, MPI_INT, 0,0, MPI_COMM_WORLD, &st );
The idea of MPI is that the whole system operates in lockstep. And sometimes you do need to be aware of which participant you are in the "world." In this case, assuming you have two members (as per my assert), you need to make one of them send and the other receive.
Note also that I changed the "dest" parameter of the send, because 0 needs to send to 1 therefore 1 needs to receive from 0.
You can later do it the other way around if you wish (if each needs to tell the other something), but in such a case you may find even more efficient ways to do it using "collective operations" where you can exchange (both send and receive) with all the peers.
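Put together, a complete version of that two-rank exchange might look like the following (a minimal sketch with the same variable names; error handling omitted):
#include <assert.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int size, rank;
    int a = 0, b = 5;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    assert(size >= 2);

    if (rank == 0)
        MPI_Send(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);       /* rank 0 sends to rank 1 */
    if (rank == 1) {
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &st);  /* rank 1 receives from rank 0 */
        printf("rank 1 received %d\n", a);
    }

    MPI_Finalize();
    return 0;
}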
In your example code, you're sending to and receiving from rank 0. If you are only running your MPI program with 1 process (which makes no sense, but we'll accept it for the sake of argument), you could make this work by using non-blocking calls instead of the blocking version. It would change your program to look like this:
MPI_Init( NULL, NULL );
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
int a, b = 5;
MPI_Status st[2];
MPI_Request request[2];
MPI_Isend( &b, 1, MPI_INT, 0,0, MPI_COMM_WORLD, &request[0] );
MPI_Irecv( &a, 1, MPI_INT, 0,0, MPI_COMM_WORLD, &request[1] );
MPI_Waitall( 2, request, st );
That would let both the send and the receive complete at the same time. The reason your MPI version doesn't like your original code (and it is very nice of it to tell you so) is that the call to MPI_SEND may block until the matching MPI_RECV has been posted, which in this case would never happen because the MPI_RECV would only be called after the MPI_SEND returns: a circular dependency.
In MPI, when you add an 'I' before an MPI call, it means "Immediate", as in, the call will return immediately and complete all the work later, when you call MPI_WAIT (or some version of it, like MPI_WAITALL in this example). So what we did here was to make the send and receive return immediately, basically just telling MPI that we intend to do a send and receive with rank 0 at some point in the future, then later (the next line), we tell MPI to go ahead and finish those calls now.
The benefit of using the immediate version of these calls is that theoretically, MPI can do some things in the background to let the send and receive calls make progress while your application is doing something else that doesn't rely on the result of that data. Then, when you finish the call to MPI_WAIT* later, the data is available and you can do whatever you need to do.
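As a self-contained sketch of that overlap idea (each rank here simply sends to itself, and the loop stands in for useful work that does not touch a or b):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i, busy = 0;
    int a = 0, b = 5;
    MPI_Request request[2];
    MPI_Status st[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Post the send and the receive immediately... */
    MPI_Isend(&b, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &request[0]);
    MPI_Irecv(&a, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &request[1]);

    /* ...do unrelated work that does not touch a or b... */
    for (i = 0; i < 1000; i++)
        busy += i;

    /* ...then complete both operations before using the buffers again. */
    MPI_Waitall(2, request, st);
    printf("rank %d: a = %d (busy = %d)\n", rank, a, busy);

    MPI_Finalize();
    return 0;
}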

MPI_REDUCE error

I don't understand why the following program is not working. When I run it with "mpirun -np 2 a.out" I would expect it to print "N: 2", but instead it gives me a seg fault.
Thank you
main.f
      program main
      implicit none
      include 'mpif.h'
      integer me, ngs, ierror
      call inimpi(me, ngs)
      call calc
      call mpi_finalize( ierror )
      stop
      end
inimpi.f
      subroutine inimpi(me, ngs)
      include 'mpif.h'
      integer me, ngs, ierror
      call mpi_init( ierror )
      call mpi_comm_rank( mpi_comm_world, me, ierror )
      call mpi_comm_size( mpi_comm_world, ngs, ierror )
      return
      end
calc.f
      subroutine calc
      include 'mpif.h'
      integer p, e, ierror
      p = 1
      call mpi_reduce(p, e, 1, mpi_integer,
     & mpi_sum, mpi_comm_world, ierror)
      print *, "N: ", e
      return
      end
Taken from the mpich2 documentation:
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
MPI_Op op, int root, MPI_Comm comm)
You didn't specify the root for mpi_reduce. Because of this, mpi_comm_world is used as root and ierror is used as MPI_Comm. Did you mean to use MPI_Allreduce, which doesn't need a root argument?
Oh, and try to use "use mpi" instead of "include 'mpif.h'" if possible; that might even have caught the current error.
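For comparison, a minimal sketch in C of the two variants the answer mentions (MPI_Reduce with an explicit root versus MPI_Allreduce, which takes no root):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int p = 1, e = 0, n = 0, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Reduce needs an explicit root; only the root gets the result. */
    MPI_Reduce(&p, &e, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* MPI_Allreduce has no root argument; every rank gets the result. */
    MPI_Allreduce(&p, &n, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("N: %d (reduce) / %d (allreduce)\n", e, n);

    MPI_Finalize();
    return 0;
}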

MPI_Isend/Recv- Is there a deadlock?

I have a total of 8 messages being passed on 4 nodes using MPI. I noticed that there were two messages whose arrays did not provide meaningful results. I have copied an excerpt of the code below. These are some related questions I had based on the code/results:
Does the MPI_Isend also require a wait? I am not sure if there is a deadlock. I also tried just passing these two variables from one node to the other, and the array values were still NULL.
Will MPI_Sendrecv improve the efficiency of the code, as suggested in "Non Blocking communication in MPI and MPI Wait Issue. Not all information is passed correctly"? If so, how/why? I would also appreciate some pointers on setting that up.
Thanks!
Source Code:
if ((my_rank) == 0)
{
    MPI_Irecv(A, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[6], &status[6]);
}
if ((my_rank) == 1)
{
    MPI_Isend(AA, Rows, MPI_DOUBLE, my_rank-1, 0, MPI_COMM_WORLD, &request[6]);
}
if ((my_rank) == 2)
{
    MPI_Isend(B, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &request[7]);
}
if ((my_rank) == 3)
{
    MPI_Irecv(BB, Rows, MPI_DOUBLE, my_rank-1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[7]);
    MPI_Wait(&request[7], &status[7]);
}
Yes, all non-blocking calls (MPI_Isend, MPI_Irecv, etc.) require a matching MPI_Wait. The call is not guaranteed to complete until MPI_Wait is called, and you should not touch the contents of the buffer until after MPI_Wait returns.
https://computing.llnl.gov/tutorials/mpi/
To use MPI_Sendrecv, the same task has to both send a message and wait to receive one. That pattern doesn't hold true for your code.
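As a minimal self-contained sketch of that pairing (one MPI_Isend and one MPI_Irecv between two ranks, each followed by its own MPI_Wait; the buffer size and values are made up), run with at least two ranks:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int my_rank, i;
    double buf[4];
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 1)
    {
        for (i = 0; i < 4; i++) buf[i] = 1.0 * i;
        MPI_Isend(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &request);
        /* The sender must also wait before reusing (or freeing) buf. */
        MPI_Wait(&request, &status);
    }
    if (my_rank == 0)
    {
        MPI_Irecv(buf, 4, MPI_DOUBLE, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &request);
        /* The receive is only guaranteed complete after the wait returns. */
        MPI_Wait(&request, &status);
        printf("rank 0 got buf[3] = %f\n", buf[3]);
    }

    MPI_Finalize();
    return 0;
}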