OpenMPI Reduce using MINLOC - C++

I'm currently working on some MPI code for a graph theory problem in which a number of nodes can each contain an answer and the length of that answer. To get everything back to the master node I'm doing an MPI_Gather for the answers and am attempting to do an MPI_Reduce using the MPI_MINLOC operation to figure out who had the shortest solution. Right now my datatype that stores the length and node ID is defined as (per examples shown on numerous sites like http://www.open-mpi.org/doc/v1.4/man3/MPI_Reduce.3.php):
struct minType
{
    float len;
    int   index;
};
On each node I'm initializing the local copies of this struct in the following manner:
int commRank;
MPI_Comm_rank (MPI_COMM_WORLD, &commRank);
minType solutionLen;
solutionLen.len = 1e37;
solutionLen.index = commRank;
At the end of the execution I have an MPI_Gather call that successfully pulls down all of the solutions (I've printed them out to verify them), followed by the call:
MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);
It's my understanding that the arguments are supposed to be:
The data source
The target for the result (only significant on the designated root node)
The number of items sent by each node
The datatype (MPI_FLOAT_INT appears to be defined based on the above link)
The operation (MPI_MINLOC appears to be defined as well)
The root node's ID in the specified comm group
The communications group to wait on.
When my code makes it to the reduce operation I get this error:
[compute-2-19.local:9754] *** An error occurred in MPI_Reduce
[compute-2-19.local:9754] *** on communicator MPI_COMM_WORLD
[compute-2-19.local:9754] *** MPI_ERR_ARG: invalid argument of some other kind
[compute-2-19.local:9754] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 9754 on
node compute-2-19.local exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
I'll admit to being completely stumped on this. In case it matters, I'm compiling using OpenMPI 1.5.3 (built with gcc 4.4) on a Rocks cluster based on CentOS 5.5.

I think you are not allowed to use the same buffer for input and output (the first two arguments). The man page says:
When the communicator is an intracommunicator, you can perform a
reduce operation in-place (the output buffer is used as the input
buffer). Use the variable MPI_IN_PLACE as the value of the root
process sendbuf. In this case, the input data is taken at the root
from the receive buffer, where it will be replaced by the output data.
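For illustration, here is a minimal sketch of the in-place variant described above, reusing minType and commRank from the question (this is a fragment, not the poster's actual code; on non-root ranks the receive buffer is not significant, so NULL is passed there):
/* solutionLen is the local {len, index} pair initialized as above */
if (commRank == 0) {
    /* root: MPI_IN_PLACE as sendbuf; the root's contribution is taken from
       solutionLen, which is then overwritten with the reduced result */
    MPI_Reduce(MPI_IN_PLACE, &solutionLen, 1, MPI_FLOAT_INT,
               MPI_MINLOC, 0, MPI_COMM_WORLD);
} else {
    /* non-root ranks: the receive buffer is not significant, so NULL is fine */
    MPI_Reduce(&solutionLen, NULL, 1, MPI_FLOAT_INT,
               MPI_MINLOC, 0, MPI_COMM_WORLD);
}
The alternative is simply to reduce into a second struct (e.g. a separate globalMin variable) on the root, which avoids the aliasing problem without MPI_IN_PLACE.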

Related

How does one send custom MPI_Datatype over to a different process?

Suppose that I create custom MPI_Datatypes for subarrays of different sizes on each of the MPI processes allocated to a program. Now I wish to send these subarrays to the master process and assemble them into a bigger array block by block. The master process is unaware of the individual datatypes (defined by the local sizes) on the other processes. Naively, therefore, I might attempt to send over these custom datatypes to the master process in the following manner.
MPI_Datatype localarr_type;
MPI_Type_create_subarray( NDIMS, array_size, local_size, box_corner, MPI_ORDER_C, MPI_FLOAT, &localarr_type );
MPI_Type_commit(&localarr_type);
if (rank == master)
{
    for (int id = 1; id < nprocs; ++id)
    {
        MPI_Recv( &localarr_type, 1, MPI_Datatype, id, tag1[id], comm_cart, MPI_STATUS_IGNORE );
        MPI_Recv( big_array, 1, localarr_type, id, tag2[id], comm_cart, MPI_STATUS_IGNORE );
    }
}
else
{
    MPI_Send( &localarr_type, 1, MPI_Datatype, master, tag1[rank], comm_cart );
    MPI_Send( local_box, 1, localarr_type, master, tag2[rank], comm_cart );
}
However, this results in a compilation error: the first error message below is from the GNU and Clang compilers, and the second is from the Intel compiler.
/* GNU OR CLANG COMPILER */
error: unexpected type name 'MPI_Datatype': expected expression
/* INTEL COMPILER */
error: type name is not allowed
This means that either (1) I am attempting to send a custom MPI_Datatype over to a different process in the wrong way or that (2) this is not possible at all. I would like to know which it is, and if it is (1), I would like to know what the correct way of communicating a custom MPI_Datatype is. Thank you.
Note.
I am aware of other ways of solving the above problem without needing to communicate MPI_Datatypes. For example, one could communicate the local array sizes and manually reconstruct the MPI_Datatype from other processes inside the master process before using it in the subsequent communication of subarrays. This is not what I am looking for.
I wish to communicate the custom MPI_Datatype itself (as shown in the example above), not something that is an instance of the datatype (which is doable, as also shown in the example code above).
First of all: you can not send a datatype like that. MPI_Datatype is a type name, not a value of type MPI_Datatype. (It's a cute idea, though.) You could send the parameters with which it is constructed, and then reconstruct it on the receiving side.
However, you are probably misunderstanding the nature of MPI. In your code, with the same datatype on workers and manager, you are sort of assuming that everyone has data of the same size/shape. That is not compatible with the manager gathering everything together.
If you're gathering data on a manager process (usually not a good idea: are you really sure you need that?) then the contributing processes have the data in a small array, say at index 0..99. So you can send them as an ordinary contiguous buffer. The "manager" has a much larger array, and places all the contributions in disjoint locations. So at most the manager needs to create subarray types to indicate where the received data goes in the big array.
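As a rough sketch of that last suggestion (a fragment under the question's assumptions: NDIMS, array_size, local_size, box_corner, big_array, local_box, comm_cart, nprocs, rank and master are taken from the question; nlocal is a hypothetical count equal to the number of elements in the local block):
if (rank == master)
{
    for (int id = 1; id < nprocs; ++id)
    {
        int lsize[NDIMS], corner[NDIMS];

        /* 1. receive the geometry of rank id's block as plain integers */
        MPI_Recv(lsize,  NDIMS, MPI_INT, id, 1, comm_cart, MPI_STATUS_IGNORE);
        MPI_Recv(corner, NDIMS, MPI_INT, id, 2, comm_cart, MPI_STATUS_IGNORE);

        /* 2. build a subarray type describing where that block lives
              inside the manager's big array */
        MPI_Datatype blocktype;
        MPI_Type_create_subarray(NDIMS, array_size, lsize, corner,
                                 MPI_ORDER_C, MPI_FLOAT, &blocktype);
        MPI_Type_commit(&blocktype);

        /* 3. receive the data itself straight into place; the type
              signatures match because nlocal equals the product of lsize */
        MPI_Recv(big_array, 1, blocktype, id, 3, comm_cart, MPI_STATUS_IGNORE);
        MPI_Type_free(&blocktype);
    }
}
else
{
    /* workers send their geometry first, then the contiguous local block */
    MPI_Send(local_size, NDIMS,  MPI_INT,   master, 1, comm_cart);
    MPI_Send(box_corner, NDIMS,  MPI_INT,   master, 2, comm_cart);
    MPI_Send(local_box,  nlocal, MPI_FLOAT, master, 3, comm_cart);
}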

Error with mpi_comm_split in Fortran

I have some questions on mpi_comm_split in Fortran.
Question I)
How can I create a single communicator with mpi_comm_split? For example, I want to create a communicator containing only the processes that are at the top of my (Cartesian) domain. I know that I have to use MPI_UNDEFINED for the processes that I don't want to be part of my new communicator, but my code below doesn't do what I expect.
do k = 1, size(proc_up)
   if (rank == proc_up(k)) then
      color_up = 1
   else
      color_up = MPI_UNDEFINED
   end if
end do
call MPI_COMM_SPLIT(comm2d, color_up, coords(2), comm_up, code)
Why doesn't it work?
Question II)
When I make several MPI_COMM_SPLIT calls (new communicators for up, down, side1, side2), it returns an error:
[nin:30039] *** An error occurred in MPI_Comm_split
[nin:30039] *** on communicator MPI_COMM_WORLD
[nin:30039] *** MPI_ERR_ARG: invalid argument of some other kind
[nin:30039] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
Does anyone know why?
Question III)
I can also use MPI_Cart_sub, but it returns many groups of processes. How can I be sure to use only the group I want?

How to find the origin of MPI message truncated errors?

I am currently having problems with an MPI application.
I am sporadically receiving MPI errors of the form:
Fatal error in MPI_Allreduce: Message truncated, error stack:
MPI_Allreduce(1339)...............: MPI_Allreduce(sbuf=0x7ffa87ffcb98, rbuf=0x7ffa87ffcba8, count=2, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD) failed
MPIR_Allreduce_impl(1180).........:
MPIR_Allreduce_intra(755).........:
MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag 14 truncated; 384 bytes received but buffer size is 16
rank 1 in job 1 l1442_42561 caused collective abort of all ranks
exit status of rank 1: killed by signal 9
However, I do not know where to look. I know that the error happens in an MPI_Allreduce call, but there are multiple ones.
How do I know which call produces the error? Simple printf debugging does not help, as the call could execute a million times before the error first occurs.
It might also not occur at all, or it might occur immediately after the start of the program.
I have been able to track down the origin of the error by calling
MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN)
and then checking the return value of each Allreduce call against MPI_SUCCESS; the first call that returns something other than MPI_SUCCESS is the location where the error occurs.
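One way to make that kind of checking systematic (a sketch assuming a C code base, not the poster's actual code) is to wrap every collective in a small macro that reports the exact file and line of the failing call; MPI_Comm_set_errhandler is the non-deprecated spelling of MPI_Errhandler_set:
#include <mpi.h>
#include <stdio.h>

/* Report the file/line of a failing MPI call, then abort deliberately. */
#define MPI_CHECK(call)                                            \
    do {                                                           \
        int rc_ = (call);                                          \
        if (rc_ != MPI_SUCCESS) {                                  \
            char msg_[MPI_MAX_ERROR_STRING]; int len_;             \
            MPI_Error_string(rc_, msg_, &len_);                    \
            fprintf(stderr, "MPI error at %s:%d: %s\n",            \
                    __FILE__, __LINE__, msg_);                     \
            MPI_Abort(MPI_COMM_WORLD, rc_);                        \
        }                                                          \
    } while (0)

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Errors now return an error code instead of killing the job outright. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    double local[2] = {1.0, 2.0}, global[2];
    MPI_CHECK(MPI_Allreduce(local, global, 2, MPI_DOUBLE, MPI_SUM,
                            MPI_COMM_WORLD));

    MPI_Finalize();
    return 0;
}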

mpi_waitall in mpich2 with null values in array_of_requests

I get the following error with MPICH-2.1.5 and the PGI compiler:
Fatal error in PMPI_Waitall: Invalid MPI_Request, error stack:
PMPI_Waitall(311): MPI_Waitall(count=4, req_array=0x2ca0ae0, status_array=0x2c8d220) failed
PMPI_Waitall(288): The supplied request in array element 0 was invalid (kind=0)
in the following example Fortran code for a stencil-based algorithm:
subroutine data_exchange
  ! data declaration
  integer request(2*neighbor), status(MPI_STATUS_SIZE, 2*neighbor)
  integer n(neighbor), iflag(neighbor)
  integer itag(neighbor), neigh(neighbor)

  ! Data initialization
  request = 0; n = 0; iflag = 0;

  ! Create data buffers to send and recv
  ! Define values of n, iflag, itag, neigh based on boundary values

  ! Isend/Irecv look like this
  ir = 0
  do i = 1, neighbor
     if (iflag(i).eq.1) then
        ir = ir + 1
        call MPI_Isend(buf_send(i), n(i), MPI_REAL, neigh(i), itag(i), MPI_COMM_WORLD, request(ir), ierr)
        ir = ir + 1
        call MPI_Irecv(buf_recv(i), nsize, MPI_REAL, neigh(i), MPI_ANY_TAG, MPI_COMM_WORLD, request(ir), ierr)
     endif
  enddo

  ! Calculations

  call MPI_Waitall(2*neighbor, request, status, ierr)
end subroutine
The error occurs when the array_of_requests passed to MPI_Waitall contains a null value (request(i)=0). The null value in array_of_requests comes up when the conditional iflag(i)=1 is not satisfied. The straightforward solution would be to comment out the conditional, but that would introduce the overhead of sending and receiving zero-size messages, which is not feasible for large-scale systems (thousands of cores).
As per the MPI-forum link, the array_of_requests list may contain null or inactive handles.
I have tried the following:
not initializing array_of_requests,
resizing array_of_requests to match the MPI_Isend + MPI_Irecv count,
assigning dummy values to array_of_requests.
I also tested the very same code with MPICH-1 as well as OpenMPI 1.4, and the code works without any issue.
Any insights would be really appreciated!
You could just move the first increment of ir into the conditional as well. Then you would have all handles in request(1:ir) at the end of the loop and could issue:
call MPI_Waitall(ir,request(1:ir),status(:,1:ir),ierr)
This would make sure that only properly initialized requests are passed to MPI_Waitall.
Another thing: does n(i) in MPI_Isend hold the same value as nsize in the corresponding MPI_Irecv?
EDIT:
After consulting the MPI Standard (3.0, Ch. 3.7.3), I think you need to initialize the request array to MPI_REQUEST_NULL if you want to give the whole request array to MPI_Waitall.
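The same pattern can be sketched in C (the question is Fortran, but the request handling is identical; the buffer sizes and the way iflag and neigh are filled are placeholders here). The whole request array starts out as MPI_REQUEST_NULL, so slots that never receive a real request are legal for MPI_Waitall, whereas a literal 0 is not a valid handle under MPICH2:
#include <mpi.h>

#define NEIGHBOR 4

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* In real code iflag/neigh come from the boundary analysis; here they are
       placeholders (all zero, so no messages are actually posted). */
    float buf_send[NEIGHBOR][8], buf_recv[NEIGHBOR][8];
    int iflag[NEIGHBOR] = {0}, neigh[NEIGHBOR] = {0};

    /* Key point: initialize to MPI_REQUEST_NULL, not 0.  MPI_Waitall legally
       skips null handles, so slots that never get a real request are harmless. */
    MPI_Request request[2 * NEIGHBOR];
    for (int i = 0; i < 2 * NEIGHBOR; ++i)
        request[i] = MPI_REQUEST_NULL;

    int ir = 0;
    for (int i = 0; i < NEIGHBOR; ++i) {
        if (iflag[i] == 1) {
            MPI_Isend(buf_send[i], 8, MPI_FLOAT, neigh[i], i,
                      MPI_COMM_WORLD, &request[ir++]);
            MPI_Irecv(buf_recv[i], 8, MPI_FLOAT, neigh[i], MPI_ANY_TAG,
                      MPI_COMM_WORLD, &request[ir++]);
        }
    }

    /* Waiting on the whole array is now valid; the null entries are ignored.
       Passing only request[0..ir-1] (as in the answer above) works as well. */
    MPI_Waitall(2 * NEIGHBOR, request, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}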

Trouble using MPI_BCAST with MPI_CART_CREATE

I am having trouble with MPI_BCAST in Fortran. I create a new communicator using MPI_CART_CREATE (say COMM_NEW). When I broadcast data from the root using the old communicator (i.e. MPI_COMM_WORLD) it works fine. But when I use the new communicator that I just created, it gives the error:
[compute-4-15.local:15298] *** An error occurred in MPI_Bcast
[compute-4-15.local:15298] *** on communicator MPI_COMM_WORLD
[compute-4-15.local:15298] *** MPI_ERR_COMM: invalid communicator
[compute-4-15.local:15298] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
I do get the result from the processors involved in COMM_NEW, along with the above error, so I think the problem is with the other processors which are not included in COMM_NEW but are present in MPI_COMM_WORLD. Any help will be greatly appreciated. Is it because the number of processors in COMM_NEW is less than the total number of processors? If so, how do I broadcast among a set of processors which is smaller than the total? Thanks.
My sample code is:
!PROGRAM TO BROADCAST THE DATA FROM ROOT TO DEST PROCESSORS
PROGRAM MAIN
IMPLICIT NONE
INCLUDE 'mpif.h'
!____________________________________________________________________________________
!-------------------------------DECLARE VARIABLES------------------------------------
INTEGER :: ERROR, RANK, NPROCS, I
INTEGER :: SOURCE, TAG, COUNT, NDIMS, COMM_NEW
INTEGER :: A(10), DIMS(1)
LOGICAL :: PERIODS(1), REORDER
!____________________________________________________________________________________
!-------------------------------DEFINE VARIABLES-------------------------------------
SOURCE = 0; TAG = 1; COUNT = 10
PERIODS(1) = .FALSE.
REORDER = .FALSE.
NDIMS = 1
DIMS(1) = 6
!____________________________________________________________________________________
!--------------------INITIALIZE MPI, DETERMINE SIZE AND RANK-------------------------
CALL MPI_INIT(ERROR)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROCS, ERROR)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, ERROR)
!
CALL MPI_CART_CREATE(MPI_COMM_WORLD, NDIMS, DIMS, PERIODS, REORDER, COMM_NEW, ERROR)
IF (RANK == SOURCE) THEN
   DO I = 1, 10
      A(I) = I
   END DO
END IF
!____________________________________________________________________________________
!----------------BROADCAST VECTOR A FROM ROOT TO DESTINATIONS------------------------
CALL MPI_BCAST(A,10,MPI_INTEGER,SOURCE,COMM_NEW,ERROR)
!PRINT*, RANK
!WRITE(*, "(10I5)") A
CALL MPI_FINALIZE(ERROR)
END PROGRAM
I think the error you give at the top of your question doesn't match up with the code at the bottom since it's complaining about a Bcast on MPI_COMM_WORLD and you don't actually do one in your code.
Anyway, if you're running with more processes than the Cartesian grid has slots (here DIMS(1) = 6), some of the processes won't be included in COMM_NEW. Instead, when the call to MPI_CART_CREATE returns, they'll get MPI_COMM_NULL for COMM_NEW instead of the new communicator with the topology. You just need to do a check to make sure you have a real communicator instead of MPI_COMM_NULL before doing the Bcast (or just have all of the ranks above DIMS(1) not enter the Bcast).
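A minimal sketch of that check (in C for brevity, while the question is Fortran; run with at least 6 processes so that the DIMS(1) = 6 grid can be formed):
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int dims[1]    = {6};          /* DIMS(1) = 6, as in the question */
    int periods[1] = {0};
    MPI_Comm comm_new;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &comm_new);

    int a[10];
    if (rank == 0)
        for (int i = 0; i < 10; ++i)
            a[i] = i + 1;

    /* Ranks that did not fit into the 6-process grid get MPI_COMM_NULL
       and must not take part in the collective. */
    if (comm_new != MPI_COMM_NULL) {
        MPI_Bcast(a, 10, MPI_INT, 0, comm_new);
        MPI_Comm_free(&comm_new);
    }

    MPI_Finalize();
    return 0;
}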
To elaborate on Wesley Bland's answer and to clarify the apparent discrepancy in the error message. When the number of MPI processes in MPI_COMM_WORLD is larger than the number of processes in the created Cartesian grid, some of the processes won't become members of the new Cartesian communicator and will get MPI_COMM_NULL -- the invalid communicator handle -- as a result. Calling a collective communication operation requires a valid inter- or intra-communicator handle. Unlike the allowed usage of MPI_PROC_NULL in point-to-point operations, using the invalid communicator handle in collective calls is erroneous. The last statement is not explicitly written in the MPI standard - instead, the language used is:
If comm is an intracommunicator, then ... If comm is an intercommunicator, then ...
Since MPI_COMM_NULL is neither an intra-, nor an inter-communicator, it doesn't fall in any of the two categories of defined behaviour and hence leads to an error condition.
Since communication errors have to occur in some context (i.e. in a valid communicator), Open MPI substitutes MPI_COMM_WORLD in the call to the error handler and hence the error message says "*** on communicator MPI_COMM_WORLD". This is the relevant code section from ompi/mpi/c/bcast.c, where MPI_Bcast is implemented:
if (ompi_comm_invalid(comm)) {
return OMPI_ERRHANDLER_INVOKE(MPI_COMM_WORLD, MPI_ERR_COMM,
FUNC_NAME);
}
...
if (MPI_IN_PLACE == buffer) {
return OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_ARG, FUNC_NAME);
}
Your code triggers the error handler inside the first check. In all other error checks comm is used instead (since it is determined to be a valid communicator handle) and the error message will state something like "*** on communicator MPI COMMUNICATOR 5 SPLIT FROM 0".