I am trying to write a 3D parallel Poisson solver using Open MPI 1.6.4.
The following is the part of my code that does the parallel communication with blocking send/receive.
These variables are declared in another file:

int px = lx*meshx;   // number of mesh points along the x axis
int py = ly*meshy;
int pz = lz*meshz;
int L = px * py * pz;

The code works well when
lx=ly=lz=10;
meshx=meshy=2, meshz=any integer.

The send/recv part fails when meshx and meshy are larger than 4: the program hangs there, waiting to send or receive data.
However, it works if I only send data from one processor to another instead of exchanging it (i.e., send from rank 0 to 1, but don't send from 1 to 0).
I can't understand why this code works while meshx and meshy are small but fails when the mesh numbers in x and y are large.
Does a blocking send/receive somehow interrupt itself, or have I mixed up the processors in my code? Does the array size matter?
#include "MPI-practice.h"
# include <iostream>
# include <math.h>
# include <string.h>
# include <time.h>
# include <sstream>
# include <string>
# include "mpi.h"
using namespace std;
extern int px,py,pz;
extern int L;
extern double simTOL_phi;
extern vector<double> phi;
int main(int argc, char *argv[]){
int numtasks, taskid, offset_A, offset_B, DD_loop,s,e;
double errPhi(0),errPhi_sum(0);
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
MPI_Status status;
if((pz-1)%numtasks!=0){
//cerr << "can not properly divide meshing points."<<endl;
exit(0);
}
offset_A=(pz-1)/numtasks*px*py;
offset_B=((pz-1)/numtasks+1)*px*py;
s=offset_A*taskid;
e=offset_A*taskid+offset_B;
int pz_offset_A=(pz-1)/numtasks;
int pz_offset_B=(pz-1)/numtasks+1;
stringstream name1;
string name2;
Setup_structure();
Initialize();
Build_structure();
if (taskid==0){
//master processor
ofstream output;
output.open("time", fstream::out | fstream::app);
output.precision(6);
clock_t start,end;
start=clock();
do{
errPhi_sum=0;
errPhi=Poisson_inner(taskid,numtasks,pz_offset_A,pz_offset_B);
//Right exchange
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
MPI_Allreduce ( &errPhi, &errPhi_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD );
}while(errPhi_sum>simTOL_phi);
end=clock();
output << "task "<< 0 <<" = "<< (end-start)/CLOCKS_PER_SEC <<endl<<endl;
Print_to_file("0.txt");
//recv from slave
for (int i=1;i<numtasks;i++){
MPI_Recv(&phi[offset_A*i], offset_B, MPI_DOUBLE, i, 1, MPI_COMM_WORLD, &status);
}
Print_to_file("sum.txt");
}
else{
//slave processor
do{
errPhi=Poisson_inner(taskid,numtasks,pz_offset_A,pz_offset_B);
//Left exchange
MPI_Send(&phi[s+px*py], px*py, MPI_DOUBLE, taskid-1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[s], px*py, MPI_DOUBLE, taskid-1, 1, MPI_COMM_WORLD, &status);
//Right exchange
if(taskid!=numtasks-1){
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
}
MPI_Allreduce ( &errPhi, &errPhi_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD );
}while(errPhi_sum>simTOL_phi);
//send back master
MPI_Send(&phi[s], offset_B, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
name1<<taskid<<".txt";
name2=name1.str();
Print_to_file(name2.c_str());
}
MPI_Finalize();
}
Replace all coupled MPI_Send/MPI_Recv calls with calls to MPI_Sendrecv. For example, this
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
becomes
MPI_Sendrecv(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1,
             &phi[e],       px*py, MPI_DOUBLE, taskid+1, 1,
             MPI_COMM_WORLD, &status);
MPI_Sendrecv uses non-blocking operations internally and thus it does not deadlock, even if two ranks are sending to each other at the same time. The only requirement (as usual) is that each send is matched by a receive.
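Applied to the worker branch of the code above, the two exchanges collapse into one combined call each. This is only a sketch using the variable names from the question; the rest of the loop stays as it is:

    // Left exchange with rank taskid-1: send our first interior plane and
    // receive the neighbour's boundary plane in a single call.
    MPI_Sendrecv(&phi[s+px*py], px*py, MPI_DOUBLE, taskid-1, 1,
                 &phi[s],       px*py, MPI_DOUBLE, taskid-1, 1,
                 MPI_COMM_WORLD, &status);

    // Right exchange with rank taskid+1 (skipped by the last rank).
    if (taskid != numtasks-1) {
        MPI_Sendrecv(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1,
                     &phi[e],       px*py, MPI_DOUBLE, taskid+1, 1,
                     MPI_COMM_WORLD, &status);
    }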
The problem is in your innermost loop. Both tasks do a blocking send at the same time, which then hangs. It doesn't hang with smaller data sets because the MPI library has a big enough buffer to hold the data, but once you increase the data beyond that buffer size, the send blocks both processes. Since neither process is trying to receive, neither buffer can empty and the program deadlocks.
To fix it, have the slave first receive from the master and then send its data back. If your sends and receives don't conflict, you can simply switch the order of the calls; otherwise you need to create a temporary buffer to hold the data.
int myrank, numprocs;
double mytime,            /* variables used for gathering timing statistics */
       maxtime,
       mintime,
       avgtime;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

MPI_Barrier(MPI_COMM_WORLD);       /* synchronize all processes */
mytime = MPI_Wtime();              /* get time just before work section */
work();
mytime = MPI_Wtime() - mytime;     /* get time just after work section */

/* compute max, min, and average timing statistics */
MPI_Reduce(&mytime, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
MPI_Reduce(&mytime, &mintime, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
MPI_Reduce(&mytime, &avgtime, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

if (myrank == 0) {
    avgtime /= numprocs;
    printf("Min: %lf Max: %lf Avg: %lf\n", mintime, maxtime, avgtime);
}
Here I always get:
Severity Code Description Project File Line Suppression State
Error C3861 'work': identifier not found Activity1 C:\Users\Acer\Desktop\Self Study\MPI\Activity1\Activity1.cpp 39
This is code I got from a tutorial and I do not have a good knowledge of C++, so please help me with this case. I have tried this many times.
Since this is a code segment from a tutorial's MPI reduce example, the work() function needs to be added to the code as follows so that its execution time can be measured. Here we use a dummy function to simulate the workload; work() is not a built-in C++ function.
void work(){
    for(int i = 0; i < INT16_MAX; i++) {
        // simulate the workload
    }
}
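For reference, the timing fragment from the question and the work() above can be assembled into a minimal complete program. The main() wrapper, the includes, and the volatile loop counter (added so the compiler does not optimize the empty loop away) are my additions; everything else follows the posted snippets:

#include <stdio.h>
#include <stdint.h>
#include <mpi.h>

/* Dummy function that simulates the workload being timed. */
void work(void) {
    for (volatile int i = 0; i < INT16_MAX; i++) {
        /* simulate the workload */
    }
}

int main(int argc, char *argv[]) {
    int myrank, numprocs;
    double mytime, maxtime, mintime, avgtime;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    MPI_Barrier(MPI_COMM_WORLD);        /* synchronize all processes */
    mytime = MPI_Wtime();               /* time just before the work section */
    work();
    mytime = MPI_Wtime() - mytime;      /* time just after the work section */

    /* compute max, min, and average timing statistics */
    MPI_Reduce(&mytime, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &mintime, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &avgtime, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myrank == 0) {
        avgtime /= numprocs;
        printf("Min: %lf Max: %lf Avg: %lf\n", mintime, maxtime, avgtime);
    }

    MPI_Finalize();
    return 0;
}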
I have an array of indices, and I want each worker to do something based on these indices.
The size of the array might be larger than the total number of ranks, so my first question is whether there is another way besides master-worker load balancing here. I want a balanced system, and I also want to assign each index to a rank.
I was thinking about master-worker: in this approach the master rank (0) gives each index to the other ranks. But when I run my code with 3 ranks and 15 indices, it halts in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
if(pCurrentID == 0) { // Master
    MPI_Status status;
    int nindices = 15;
    int mesg[1] = {0};
    int initial_id = 0;
    int recv_mesg[1] = {0};

    // -- send out initial ids to workers --//
    while (initial_id < size - 1) {
        if (initial_id < nindices) {
            MPI_Send(mesg, 1, MPI_INT, initial_id + 1, 1, MPI_COMM_WORLD);
            mesg[0] += 1;
            ++initial_id;
        }
    }

    //-- hand out id to workers dynamically --//
    while (mesg[0] != nindices) {
        MPI_Probe(MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
        int isource = status.MPI_SOURCE;
        MPI_Recv(recv_mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD, &status);
        MPI_Send(mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD);
        mesg[0] += 1;
    }

    //-- hand out ending signals once done --//
    for (int rank = 1; rank < size; ++rank) {
        mesg[0] = -1;
        MPI_Send(mesg, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
    }
} else {
    MPI_Status status;
    int id[1] = {0};

    // Get the surrounding fragment id
    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    int itag = status.MPI_TAG;
    MPI_Recv(id, 1, MPI_INT, 0, itag, MPI_COMM_WORLD, &status);
    int jfrag = id[0];
    if (jfrag < 0) break;

    // do something
    MPI_Send(id, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}
I have an array of indices, and I want each worker to do something based on these indices. The size of the array might be larger than the total number of ranks, so my first question is whether there is another way besides master-worker load balancing here. I want a balanced system, and I also want to assign each index to a rank.
No, but if the work performed per array index takes roughly the same amount of time, you can simply scatter the array among the processes.
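To illustrate that alternative, here is a minimal sketch (mine, not from the question) that scatters an index array evenly across the ranks; it assumes the number of indices is an exact multiple of the number of ranks:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { PER_RANK = 5 };               /* indices per rank, made up for this sketch */
    const int nindices = PER_RANK * size;

    int *all_indices = NULL;
    if (rank == 0) {                     /* only the root builds the full index array */
        all_indices = malloc(nindices * sizeof(int));
        for (int i = 0; i < nindices; i++)
            all_indices[i] = i;
    }

    /* Every rank receives an equal, contiguous share of the indices. */
    int my_indices[PER_RANK];
    MPI_Scatter(all_indices, PER_RANK, MPI_INT,
                my_indices, PER_RANK, MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < PER_RANK; i++)
        printf("rank %d works on index %d\n", rank, my_indices[i]);

    free(all_indices);                   /* free(NULL) is a no-op on non-root ranks */
    MPI_Finalize();
    return 0;
}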
I was thinking about master-worker, and in this approach the master rank (0) gives each index to the other ranks. But when I run my code with 3 ranks and 15 indices, it halts in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
As already pointed out in the comments, the problem is that you are missing, on the worker side, the loop that keeps asking the master for work.
The load balancer can be implemented as follows:
1. The master initially sends an iteration to each of the other workers.
2. Each worker waits for a message from the master.
3. The master then calls MPI_Recv with MPI_ANY_SOURCE and waits for a worker to request more work.
4. After a worker finishes working on its first iteration, it sends its rank to the master, signaling the master to send a new iteration.
5. The master reads the rank sent by the worker in step 4 and checks the array for a new index; if there is still a valid index, it sends it to that worker, otherwise it sends a special message signaling the worker that there is no more work to be performed (that message can be, for instance, -1).
6. When a worker receives the special message, it stops working.
7. The master stops when all the workers have received the special message.
An example of this approach:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]){
    MPI_Init(NULL, NULL); // Initialize the MPI environment
    int rank;
    int size;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int work_is_done = -1;

    if(rank == 0){
        int max_index = 10;
        int index_simulator = 0;

        // Send statically the first iterations
        for(int i = 1; i < size; i++){
            MPI_Send(&index_simulator, 1, MPI_INT, i, i, MPI_COMM_WORLD);
            index_simulator++;
        }

        int processes_finishing_work = 0;
        do{
            int process_that_wants_work = 0;
            MPI_Recv(&process_that_wants_work, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
            if(index_simulator < max_index){
                MPI_Send(&index_simulator, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
                index_simulator++;
            }
            else{ // send special message
                MPI_Send(&work_is_done, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
                processes_finishing_work++;
            }
        } while(processes_finishing_work < size - 1);
    }
    else{
        int index_to_work = 0;
        MPI_Recv(&index_to_work, 1, MPI_INT, 0, rank, MPI_COMM_WORLD, &status);
        // Work with the iteration index_to_work

        do{
            MPI_Send(&rank, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
            MPI_Recv(&index_to_work, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            if(index_to_work != work_is_done){
                // Work with the iteration index_to_work
            }
        } while(index_to_work != work_is_done);
    }

    printf("Process {%d} -> I AM OUT\n", rank);
    MPI_Finalize();
    return 0;
}
You can improve upon this approach by reducing 1) the number of messages sent and 2) the time spent waiting for them. For the former you can try a chunking strategy (i.e., sending more than one index per MPI communication). For the latter you can try playing around with non-blocking MPI communications, or have two threads per process: one to receive/send the work, another to actually perform it. This multithreading approach would also allow the master process to work on the array indices itself, but it significantly complicates the code.
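For the chunking idea, the master would hand out a block of indices per message instead of a single one. A rough sketch of just the send/receive pair, meant to slot into the loops of the example above (CHUNK, chunk_buf and recv_buf are illustrative names, not part of the original code):

    enum { CHUNK = 4 };   /* illustrative chunk size */

    /* Master side: pack up to CHUNK indices and send only the valid ones.
       A zero-length message then doubles as the "no more work" signal. */
    int chunk_buf[CHUNK];
    int count = 0;
    while (index_simulator < max_index && count < CHUNK)
        chunk_buf[count++] = index_simulator++;
    MPI_Send(chunk_buf, count, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);

    /* Worker side: receive into a CHUNK-sized buffer and ask how much arrived. */
    int recv_buf[CHUNK], received;
    MPI_Recv(recv_buf, CHUNK, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_INT, &received);
    if (received == 0) {
        /* no more work: leave the worker loop */
    }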
Does this example contradict the manual? The manual states that both the array of requests and the array of statuses must be of the same size. To be more precise, both arrays should be at least as long as indicated by the count argument. Yet in the example below the status array size is 2, not 4. The example also contradicts this statement from the manual:
The error-free execution of MPI_Waitall(count, array_of_requests,
array_of_statuses) has the same effect as the execution of
MPI_Wait(&array_of_request[i], &array_of_statuses[i]), for
i=0,...,count-1, in some arbitrary order.
#include "mpi.h"
#include <stdio.h>
int main(argc,argv)
int argc;
char *argv[]; {
int numtasks, rank, next, prev, buf[2], tag1=1, tag2=2;
MPI_Request reqs[4];
MPI_Status stats[2];
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
prev = rank-1;
next = rank+1;
if (rank == 0) prev = numtasks - 1;
if (rank == (numtasks - 1)) next = 0;
MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);
{ do some work }
MPI_Waitall(4, reqs, stats);
MPI_Finalize();
}
P.S. The definition of main looks strange: the return value is missing. Is it prehistoric C or a typo?
Yes, this example contradicts the manual. If you compare the example with the Fortran version, you'll see that the Fortran version is correct in that its status array is large enough. (Strangely enough, it's a 2D array, but thanks to implicit interfaces and storage association it can be seen as a 1D array of size MPI_STATUS_SIZE * 2, which is larger than 4 provided MPI_STATUS_SIZE is larger than 1; on my system it's 5.)
And yes, the missing return statement is an error; however, some compilers resort to just emitting a warning when the return statement is omitted from main(). The prehistoric nature of the code can also be seen in the K&R-style declaration of the arguments.
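For completeness, this is how the C example could be repaired so it matches the manual: size the status array to the number of requests (or pass MPI_STATUSES_IGNORE instead), use a standard main signature, and return a value. This corrected version is mine, not part of the original tutorial:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];            /* one status per request */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = (rank == 0) ? numtasks - 1 : rank - 1;
    next = (rank == numtasks - 1) ? 0 : rank + 1;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    /* ... do some work here ... */

    MPI_Waitall(4, reqs, stats);    /* or pass MPI_STATUSES_IGNORE instead of stats */

    MPI_Finalize();
    return 0;
}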
I have a total of 8 messages being passed among 4 nodes using MPI. I noticed that two of the messages' arrays did not contain meaningful results. I have copied an excerpt of the code below. These are some related questions I had based on the code/results below:
Does MPI_Isend also require a wait? I am not sure if there is a deadlock. I also tried just passing these two variables from one node to the other, and the array values were still NULL.
Will MPI_Sendrecv improve the efficiency of the code, as suggested in "Non Blocking communication in MPI and MPI Wait Issue. Not all information is passed correctly"? If so, how and why? I would also appreciate some pointers on setting that up.
Thanks!
Source Code:
if ((my_rank) == 0)
{
    MPI_Irecv(A, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[6], &status[6]);
}
if ((my_rank) == 1)
{
    MPI_Isend(AA, Rows, MPI_DOUBLE, my_rank-1, 0, MPI_COMM_WORLD, &request[6]);
}
if ((my_rank) == 2)
{
    MPI_Isend(B, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &request[7]);
}
if ((my_rank) == 3)
{
    MPI_Irecv(BB, Rows, MPI_DOUBLE, my_rank-1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[7]);
    MPI_Wait(&request[7], &status[7]);
}
Yes, all non-blocking calls (MPI_Isend, MPI_Irecv, etc.) require a matching MPI_Wait. The call is not guaranteed to complete until MPI_Wait is called, and you should not change the contents of the buffer until after MPI_Wait returns.
https://computing.llnl.gov/tutorials/mpi/
To use MPI_Sendrecv, the same task has to both send a message and wait to receive one. That pattern doesn't hold for your code.
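In terms of the excerpt above, only the sending ranks need to change: each waits on its own send request before the buffer is reused. A sketch, assuming the request and status arrays are declared as in the question:

if ((my_rank) == 1)
{
    MPI_Isend(AA, Rows, MPI_DOUBLE, my_rank-1, 0, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[6], &status[6]);   /* do not reuse AA until this returns */
}
if ((my_rank) == 2)
{
    MPI_Isend(B, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &request[7]);
    MPI_Wait(&request[7], &status[7]);   /* do not reuse B until this returns */
}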
I noticed that not all my MPI_Isend/MPI_Irecv calls were being executed. I think it may be either the order in which I do my sends and receives, or the fact that the code doesn't wait until all the commands have executed. I have copied the excerpt from the code below. Could you suggest what I might be doing incorrectly?
Thanks!
MPI_Status status[8];
MPI_Request request[8];
....
....
if ((my_rank) == 0)
{
    MPI_Isend(eastedge0, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &request[0]);
    MPI_Irecv(westofwestedge0, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[6], &status[6]);
}
if ((my_rank) == 1)
{
    MPI_Irecv(eastofeastedge1, Rows, MPI_DOUBLE, my_rank-1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[0]);
    MPI_Wait(&request[0], &status[0]);
    MPI_Isend(westedge1, Rows, MPI_DOUBLE, my_rank-1, 0, MPI_COMM_WORLD, &request[6]);
}
Either rank 0 or 1 could still be sending data after this block of code has been executed (as you don't wait on the send request object). This could cause problems if you modify the data before it has finished sending.
For this particular example, perhaps MPI_Sendrecv would be useful?
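For example, the 0↔1 exchange from the excerpt could be written with one MPI_Sendrecv per rank (a sketch using the question's buffer names; the tag value is illustrative):

if ((my_rank) == 0)
{
    MPI_Sendrecv(eastedge0,       Rows, MPI_DOUBLE, 1, 0,
                 westofwestedge0, Rows, MPI_DOUBLE, 1, 0,
                 MPI_COMM_WORLD, &status[0]);
}
if ((my_rank) == 1)
{
    MPI_Sendrecv(westedge1,       Rows, MPI_DOUBLE, 0, 0,
                 eastofeastedge1, Rows, MPI_DOUBLE, 0, 0,
                 MPI_COMM_WORLD, &status[0]);
}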
For every non-blocking MPI call there has to be a corresponding wait. You are missing one wait per process.
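Alternatively, keep the non-blocking calls and wait on both requests in each rank, for example (a sketch, keeping the question's variable names):

if ((my_rank) == 0)
{
    MPI_Isend(eastedge0, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &request[0]);
    MPI_Irecv(westofwestedge0, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[0], &status[0]);   /* the send request also needs a wait */
    MPI_Wait(&request[6], &status[6]);
}
if ((my_rank) == 1)
{
    MPI_Irecv(eastofeastedge1, Rows, MPI_DOUBLE, my_rank-1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[0]);
    MPI_Isend(westedge1, Rows, MPI_DOUBLE, my_rank-1, 0, MPI_COMM_WORLD, &request[6]);
    MPI_Wait(&request[0], &status[0]);
    MPI_Wait(&request[6], &status[6]);   /* the send request also needs a wait */
}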