My code is below, but when I run it I encounter this error:
aborting job:
Fatal error in MPI_Waitall: Invalid MPI_Request, error stack:
MPI_Wait(171): MPI_Waitall(count=4, req_array=0x000001400E0DE0,status_array = 0x0000000000012FD8C) failed
MPI_Waitall(96). : Invalid MPI_Request
MPI_Waitall(96). : Invalid MPI_Request
When I use a blocking send and receive, my code gives me the correct answer, but when I use a non-blocking send and receive I get this error.
This is my code:
integer reqs(4) ! required variable for non-blocking calls
integer stats(MPI_status_SIZE,4) ! required variable for WAITALL routine
call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, taskid, ierr )
call MPI_COMM_SIZE( MPI_COMM_WORLD, numtasks, ierr )
! Send data to the left neighbor
if (taskid > 0) then
   call MPI_ISEND(phi0(1,1),N_z,MPI_DOUBLE_PRECISION,taskid-1,11,MPI_COMM_WORLD,&
                  reqs(1),ierr)
end if
! Send data to the right neighbor
if (taskid < numtasks-1) then
   call MPI_ISEND(phi0(1,cols),N_z,MPI_DOUBLE_PRECISION,taskid+1,10,MPI_COMM_WORLD,&
                  reqs(2),ierr)
end if
! Receive data from the left neighbor
if (taskid > 0) then
   call MPI_IRECV(phi0(1,0),N_z,MPI_DOUBLE_PRECISION,taskid-1,10,MPI_COMM_WORLD,&
                  reqs(3),ierr)
end if
! Receive data from the right neighbor
if (taskid < numtasks-1) then
   call MPI_IRECV(phi0(1,cols+1),N_z,MPI_DOUBLE_PRECISION,taskid+1,11,MPI_COMM_WORLD,&
                  reqs(4),ierr)
end if
call MPI_WAITALL(4, reqs, stats, ierr)
I'm having trouble with my MPI_Isend and MPI_Irecv blocks of code. I need to send a number Cin to the next process up the line, and then the current process can go about its business.
The receiving process needs to receive before it can go further in its calculations, but when I don't have an MPI_Wait it never gets the data, and when I do it just hangs forever. What am I doing wrong?
Note: I only set Cin to 3 in order to see when the message doesn't go through. Currently it just hangs.
void ComputeS5C()
{
    MPI_Request send_request, recv_request;
    MPI_Status status;
    int Cin[1] = {3};

    if (my_rank == 0) {
        Cin[0] = 0;
    }
    else {
        MPI_Irecv(Cin, 1, MPI_INT, my_rank - 1, 0, MPI_COMM_WORLD, &recv_request);
        MPI_Wait(&recv_request, &status);
        fprintf(stderr, "RANK:%d Message Received from rank%d: Cin=%d\n", my_rank, my_rank-1, Cin[0]);
    }

    int k;
    for (k = 0; k < Size_5; k++)
    {
        int s5clast;
        if (k == 0)
        {
            s5clast = Cin[0];
        }
        else
        {
            s5clast = s5c[k-1];
        }
        s5c[k] = s5g[k] | (s5p[k] & s5clast);
    }

    // if not highest rank, pass the carry-in upstream
    if (my_rank < world_size - 1) {
        MPI_Isend(&s5c[k], 1, MPI_INT, my_rank+1, 1, MPI_COMM_WORLD, &send_request);
        fprintf(stderr, "RANK:%d Message sent to rank%d: Cin=%d\n", my_rank, my_rank+1, s5c[k]);
    }
    MPI_Wait(&send_request, &status);
}
The error in your code has to do with the mismatch of tags. Messages are sent with tag = 1 and received with tag = 0, so the sends and receives never match, which explains why all processes are stuck waiting for the sent messages to be consumed. Change the tags so that they match.
A note: when using MPI_Irecv you always need an MPI_Wait to know when it is safe to consume the received data. I think that in your example MPI_Recv is more appropriate.
It also seems that you communicate one rank after the other, sequentially, which adds quite a large overhead.
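As a minimal sketch of the suggested fix (variable names such as my_rank, world_size and Cin follow the question; the actual carry computation is reduced to a placeholder), the receive uses the same tag as the send, and a blocking MPI_Recv replaces MPI_Irecv plus MPI_Wait:

int Cin = 0, Cout;

/* Every rank except rank 0 waits for the carry from its left neighbour. */
if (my_rank > 0) {
    MPI_Recv(&Cin, 1, MPI_INT, my_rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

Cout = Cin;  /* placeholder for the real carry computation over s5c */

/* Every rank except the last forwards the carry with the same tag (0). */
if (my_rank < world_size - 1) {
    MPI_Send(&Cout, 1, MPI_INT, my_rank + 1, 0, MPI_COMM_WORLD);
}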
In socket programming we have the select() function, which allows us to check multiple sockets simultaneously. I want to know whether any such feature is available in the MPI library as well.
In the first for loop of the following code, I post non-blocking send and receive requests from one node to every other node. In the second for loop, instead of waiting for each node in sequential order, I want to start processing the data of whichever node sends its data first. Is there any way to do that?
for (id = 0; id < numtasks; id++) {
    if (id == taskid) continue;
    if (sendCount[id] != 0) MPI_Isend(sendBuffer[id], N*sendCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
    if (recvCount[id] != 0) MPI_Irecv(recvBuffer[id], N*recvCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
}
for (id = 0; id < numtasks; id++) {
    if (id == taskid) continue;
    if (recvCount[id] != 0) {
        MPI_Wait(&reqs[id], &status);
        for (i = 0; i < recvCount[id]; i++)
            splitData(N, recvBuffer[id] + N*i, U[toRecv[id][i]]);
    }
}
Following the given answers, I have tried to modify my code, but I still get a segmentation fault at run time. Please help me figure out the error.
for (id = 0; id < numtasks; id++) {
    if (id == taskid) continue;
    if (sendCount[id] != 0) MPI_Isend(sendBuffer[id], N*sendCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
    if (recvCount[id] != 0) MPI_Irecv(recvBuffer[id], N*recvCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
}
reqs[taskid] = reqs[numtasks-1];
for (i = 0; i < numtasks-1; i++) {
    MPI_Waitany(numtasks-1, reqs, &id, &status);
    if (id == taskid) id = numtasks-1;
    for (i = 0; i < recvCount[id]; i++)
        splitData(N, recvBuffer[id] + N*i, U[toRecv[id][i]]);
}
The closest equivalent would be MPI_Waitsome: you provide a list of requests, and it returns as soon as at least one request has completed. However, there is no timeout as in select. There are also MPI_Waitany and MPI_Waitall, as well as MPI_Testany, MPI_Testall, and MPI_Testsome.
The any and some variants mainly differ in how the interface informs you about one or multiple completed requests.
Edit: You need to use a separate request for each operation, in particular separate requests for the send and the receive.
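As a minimal sketch under the assumptions of the question (numtasks, taskid, N, tag, sendBuffer, recvBuffer, sendCount, recvCount, toRecv, U and splitData are as in the original code), separate request arrays are used for sends and receives, unused slots are set to MPI_REQUEST_NULL so the wait calls skip them, and MPI_Waitany processes whichever receive finishes first:

MPI_Request send_reqs[numtasks], recv_reqs[numtasks];
MPI_Status status;
int id, i, done;

for (id = 0; id < numtasks; id++) {
    send_reqs[id] = MPI_REQUEST_NULL;
    recv_reqs[id] = MPI_REQUEST_NULL;
}

for (id = 0; id < numtasks; id++) {
    if (id == taskid) continue;
    if (sendCount[id] != 0)
        MPI_Isend(sendBuffer[id], N*sendCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &send_reqs[id]);
    if (recvCount[id] != 0)
        MPI_Irecv(recvBuffer[id], N*recvCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &recv_reqs[id]);
}

/* Process whichever receive completes first; MPI_Waitany skips null requests
   and returns MPI_UNDEFINED once every request in the array has completed. */
for (;;) {
    MPI_Waitany(numtasks, recv_reqs, &done, &status);
    if (done == MPI_UNDEFINED) break;
    for (i = 0; i < recvCount[done]; i++)
        splitData(N, recvBuffer[done] + N*i, U[toRecv[done][i]]);
}

/* Make sure the sends have completed before the send buffers are reused. */
MPI_Waitall(numtasks, send_reqs, MPI_STATUSES_IGNORE);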
Is it possible to call MPI_send and MPI_recv inside a subroutine rather than in the main program? I have written a minimal program for what I am trying to do. It compiles fine, but it does not work: the program just hangs in the "sendrecv" subroutine. Any ideas how I can do this?
main.f
      program main
      implicit none
      include 'mpif.h'
      integer me, np, ierror
      call MPI_init( ierror )
      call MPI_comm_rank( mpi_comm_world, me, ierror )
      call MPI_comm_size( mpi_comm_world, np, ierror )
      call sendrecv(me, np)
      call mpi_finalize( ierror )
      stop
      end
sendrecv.f
      subroutine sendrecv(me, np)
      include 'mpif.h'
      integer np, me, sender, tag
      integer, dimension(mpi_status_size) :: status
      integer, dimension(1) :: recv, send
      if (me.eq.0) then
         do sender = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &                    mpi_comm_world, status, ierror)
         end do
      end if
      if ((me.ge.1).and.(me.lt.np)) then
         send(1) = me*12
         call mpi_send(send, 1, mpi_integer, 0, tag,
     &                 mpi_comm_world, ierror)
      end if
      return
      end
I have two questions. The first one is:
I am going to use MS-MPI, and by "only MPI" I meant that we must not use sockets. My application is a scalable distributed data structure: initially we have one server containing a file of variable size (the size can grow through insertions and shrink through deletions). When the size of the file exceeds a certain limit, the file is split; one half remains on the first server and the other half is moved to a new server, and so on. The client always needs to know the address of the data it wants to retrieve, so it should have an image of the file's split operations. I hope that makes it clearer.
And the second one is:
I have tried to compile a simple client/server application (the source code is below) with MS-MPI and with MPICH2, and it does not work: it gives me the error message "fatal error in mpi_open_port()" along with other errors in the stack. So I installed Open MPI on Ubuntu 11.10 and tried to run the same example. The server side worked and gave me a port name, but the client side gave me this error message:
[user-Compaq-610:03833] [[39604,1],0] ORTE_ERROR_LOG: Not found in file ../../../../../../ompi/mca/dpm/orte/dpm_orte.c at line 155
[user-Compaq-610:3833] *** An error occurred in MPI_Comm_connect
[user-Compaq-610:3833] *** on communicator MPI_COMM_WORLD
[user-Compaq-610:3833] *** MPI_ERR_INTERN: internal error
[user-Compaq-610:3833] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 3833 on
node toufik-Compaq-610 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
So I am confused about what the problem is, and I spent a while trying to fix it.
I would be grateful if anybody could help me with it, and thank you in advance.
The source code is here:
/* the server side */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int my_id;
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm newcomm;
    int passed_num;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);

    passed_num = 111;

    if (my_id == 0)
    {
        MPI_Open_port(MPI_INFO_NULL, port_name);
        printf("%s\n\n", port_name); fflush(stdout);
    } /* endif */

    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);

    if (my_id == 0)
    {
        MPI_Send(&passed_num, 1, MPI_INT, 0, 0, newcomm);
        printf("after sending passed_num %d\n", passed_num); fflush(stdout);
        MPI_Close_port(port_name);
    } /* endif */

    MPI_Finalize();
    exit(0);
} /* end main() */
And on the client side:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int passed_num;
    int my_id;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);

    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);

    if (my_id == 0)
    {
        MPI_Status status;
        MPI_Recv(&passed_num, 1, MPI_INT, 0, 0, newcomm, &status);
        printf("after receiving passed_num %d\n", passed_num); fflush(stdout);
    } /* endif */

    MPI_Finalize();
    return 0;
    //exit(0);
} /* end main() */
How exactly do you run the application? It seems that the provided client and server codes are the same.
Usually the code is the same for all MPI processes, and the program decides what to execute based on its rank, as in this snippet: if (my_id == 0) { ... }. The application is executed with mpiexec. For example, mpiexec -n 2 ./application would run two MPI processes with ranks 0 and 1 in one MPI_COMM_WORLD communicator. Where exactly the processes are executed (on the same node or on different ones) depends on the configuration.
Nevertheless, you should create a port with MPI_Open_port and then pass it to MPI_Comm_connect. Here is an example of how to use these functions: MPI_Comm_connect
Moreover, for each MPI_Recv there must be a corresponding MPI_Send; otherwise the receiving process will wait forever.
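As a hedged usage sketch (the executable names are placeholders, and depending on the MPI implementation connecting two separately started jobs may need extra setup such as a name server): start the server first, copy the port name it prints, and start the client with that port string as its first argument, since the client passes argv[1] to MPI_Comm_connect.

mpiexec -n 1 ./server
mpiexec -n 1 ./client "<port name printed by the server>"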
I had a problem with a program that uses MPI, and I have just fixed it; however, I don't understand what was wrong in the first place. I'm quite green with programming-related stuff, so please be forgiving.
The program is:
#include <iostream>
#include <cstdlib>
#include <mpi.h>

#define RNumber 3

using namespace std;

int main() {
    /* Initialize MPI */
    int my_rank;        // My process rank
    int comm_sz;        // Number of processes
    MPI_Comm GathComm;  // Communicator for MPI_Gather
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    /* Initialize an array for results */
    long rawT[RNumber];
    long *Times = NULL; // Results from threads
    if (my_rank == 0) Times = (long*) malloc(comm_sz*RNumber*sizeof(long));

    /* Fill rawT with results at threads */
    for (int i = 0; i < RNumber; i++) {
        rawT[i] = i;
    }

    if (my_rank == 0) {
        /* Main thread receives data from other threads */
        MPI_Gather(rawT, RNumber, MPI_LONG, Times, RNumber, MPI_LONG, 0, GathComm);
    }
    else {
        /* Other threads send calculation results to main thread */
        MPI_Gather(rawT, RNumber, MPI_LONG, Times, RNumber, MPI_LONG, 0, GathComm);
    }

    /* Finalize MPI */
    MPI_Finalize();
    return 0;
}
On execution the program returns the following message:
Fatal error in PMPI_Gather: Invalid communicator, error stack:
PMPI_Gather(863): MPI_Gather(sbuf=0xbf824b70, scount=3, MPI_LONG,
rbuf=0x98c55d8, rcount=3, MPI_LONG, root=0, comm=0xe61030) failed
PMPI_Gather(757): Invalid communicator Fatal error in PMPI_Gather:
Invalid communicator, error stack: PMPI_Gather(863):
MPI_Gather(sbuf=0xbf938960, scount=3, MPI_LONG, rbuf=(nil), rcount=3,
MPI_LONG, root=0, comm=0xa6e030) failed PMPI_Gather(757): Invalid
communicator
After I remove GathComm altogether and substitute the default communicator MPI_COMM_WORLD for it, everything works fine.
Could anyone be so kind as to explain what I was doing wrong and why this adjustment made everything work?
That's because GathComm has not been assigned a valid communicator. "MPI_Comm GathComm;" only declares the variable to hold a communicator but doesn't create one.
You can use the default communicator (MPI_COMM_WORLD) if you simply want to include all procs in the operation.
Custom communicators are useful when you want to organise your processes into separate groups or when you use virtual communication topologies.
To find out more, check out this article, which describes groups, communicators, and topologies.
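As a hedged sketch of the two options (using the question's variable names; the even/odd split key is purely illustrative), GathComm can be given a valid value either by duplicating MPI_COMM_WORLD or by splitting the processes into groups before it is used in MPI_Gather:

MPI_Comm GathComm;

/* Option 1: a private copy of MPI_COMM_WORLD just for the gather. */
MPI_Comm_dup(MPI_COMM_WORLD, &GathComm);

/* Option 2: split the ranks into two groups (even/odd) and gather within each.
MPI_Comm_split(MPI_COMM_WORLD, my_rank % 2, my_rank, &GathComm);
*/

MPI_Gather(rawT, RNumber, MPI_LONG, Times, RNumber, MPI_LONG, 0, GathComm);

MPI_Comm_free(&GathComm);  /* release the communicator when it is no longer needed */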