I have a Finite Element code that uses blocking receives and non-blocking sends. Each element has 3 incoming faces and 3 outgoing faces. The mesh is split up among many processors, so sometimes the boundary conditions come from the element's own processor and sometimes from neighboring processors. Relevant parts of the code are:
std::vector<task>::iterator it = All_Tasks.begin();
std::vector<task>::iterator it_end = All_Tasks.end();
int task = 0;
for (; it != it_end; it++, task++)
{
for (int f = 0; f < 3; f++)
{
// Get the neighbors for each incoming face
Neighbor neighbor = subdomain.CellSets[(*it).cellset_id_loc].neighbors[incoming[f]];
// Get buffers from boundary conditions or neighbor processors
if (neighbor.processor == rank)
{
subdomain.Set_buffer_from_bc(incoming[f]);
}
else
{
// Get the flag from the corresponding send
target = GetTarget((*it).angle_id, (*it).group_id, (*it).cell_id);
if (incoming[f] == x)
{
int size = cells_y*cells_z*groups*angles*4;
MPI_Status status;
MPI_Recv(&subdomain.X_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
}
if (incoming[f] == y)
{
int size = cells_x*cells_z*groups*angles * 4;
MPI_Status status;
MPI_Recv(&subdomain.Y_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
}
if (incoming[f] == z)
{
int size = cells_x*cells_y*groups*angles * 4;
MPI_Status status;
MPI_Recv(&subdomain.Z_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
}
}
}
... computation ...
for (int f = 0; f < 3; f++)
{
// Get the outgoing neighbors for each face
Neighbor neighbor = subdomain.CellSets[(*it).cellset_id_loc].neighbors[outgoing[f]];
if (neighbor.IsOnBoundary)
{
// store the buffer into the boundary information
}
else
{
target = GetTarget((*it).angle_id, (*it).group_id, neighbor.cell_id);
if (outgoing[f] == x)
{
int size = cells_y*cells_z*groups*angles * 4;
MPI_Request request;
MPI_Isend(&subdomain.X_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
}
if (outgoing[f] == y)
{
int size = cells_x*cells_z*groups*angles * 4;
MPI_Request request;
MPI_Isend(&subdomain.Y_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
}
if (outgoing[f] == z)
{
int size = cells_x*cells_y*groups*angles * 4;
MPI_Request request;
MPI_Isend(&subdomain.Z_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
}
}
}
}
A processor can do a lot of tasks before it needs information from other processors. I need a non-blocking send so that the code can keep working, but I'm pretty sure the receives are overwriting the send buffers before they get sent.
I've tried timing this code, and it's taking 5-6 seconds for the call to MPI_Recv, even though the message it's trying to receive has been sent. My theory is that the Isend is starting, but not actually sending anything until the Recv is called. The message itself is on the order of 1 MB. I've looked at benchmarks and messages of this size should take a very small fraction of a second to send.
My question is, in this code, is the buffer that was sent being overwritten, or just the local copy? Is there a way to 'add' to a buffer when I'm sending, rather than writing to the same memory location? I want the Isend to write to a different buffer every time it's called so the information isn't being overwritten while the messages wait to be received.
** EDIT **
A related question that might fix my problem: Can MPI_Test or MPI_Wait give information about an MPI_Isend writing to a buffer, i.e. return true if the Isend has written to the buffer, but that buffer has yet to be received?
** EDIT 2 **
I have added more information about my problem.
So it looks like I just have to bite the bullet and allocate enough memory in the send buffers to accommodate all the messages, and then just send portions of the buffer when I send.
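Roughly what I have in mind (just a sketch with made-up names like send_pool, n_messages, dest and tag, not my actual code): carve out one slice of a big pool per outstanding Isend, keep the requests around, and don't reuse or free anything until MPI_Waitall (or MPI_Test) says the sends have completed.
#include <mpi.h>
#include <vector>

// Sketch only: n_messages, msg_size, dest and tag are hypothetical placeholders.
void post_all_sends(int n_messages, int msg_size, const int* dest, const int* tag)
{
    // One slice of the pool per message, so later work can't clobber data
    // that an earlier Isend hasn't finished with yet.
    std::vector<double> send_pool(static_cast<size_t>(n_messages) * msg_size);
    std::vector<MPI_Request> send_requests(n_messages, MPI_REQUEST_NULL);

    for (int m = 0; m < n_messages; ++m)
    {
        double* slice = &send_pool[static_cast<size_t>(m) * msg_size];
        // ... fill 'slice' with the outgoing face data for message m ...
        MPI_Isend(slice, msg_size, MPI_DOUBLE, dest[m], tag[m], MPI_COMM_WORLD, &send_requests[m]);
    }

    // The pool must stay alive until every send completes, so wait (or test)
    // before the pool goes out of scope or any slice is reused.
    MPI_Waitall(n_messages, send_requests.data(), MPI_STATUSES_IGNORE);
}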
Related
I have an array of indices, and I want each worker to do something based on these indices. The size of the array might be larger than the total number of ranks, so my first question is whether there is any approach other than master-worker load balancing here. I want a balanced system, and I also want to assign every index to some rank.
I was thinking about master-worker, where the master rank (0) gives each index to the other ranks. But when I run my code with 3 ranks and 15 indices, it halts in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
if(pCurrentID == 0) { // Master
MPI_Status status;
int nindices = 15;
int mesg[1] = {0};
int initial_id = 0;
int recv_mesg[1] = {0};
// -- send out initial ids to workers --//
while (initial_id < size - 1) {
if (initial_id < nindices) {
MPI_Send(mesg, 1, MPI_INT, initial_id + 1, 1, MPI_COMM_WORLD);
mesg[0] += 1;
++initial_id;
}
}
//-- hand out id to workers dynamically --//
while (mesg[0] != nindices) {
MPI_Probe(MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
int isource = status.MPI_SOURCE;
MPI_Recv(recv_mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD, &status);
MPI_Send(mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD);
mesg[0] += 1;
}
//-- hand out ending signals once done --//
for (int rank = 1; rank < size; ++rank) {
mesg[0] = -1;
MPI_Send(mesg, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
}
} else {
MPI_Status status;
int id[1] = {0};
// Get the surrounding fragment id
MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
int itag = status.MPI_TAG;
MPI_Recv(id, 1, MPI_INT, 0, itag, MPI_COMM_WORLD, &status);
int jfrag = id[0];
if (jfrag < 0) break;
// do something
MPI_Send(id, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}
I have an array of indices, and I want each worker to do something based on these indices. The size of the array might be larger than the total number of ranks, so my first question is whether there is any approach other than master-worker load balancing here. I want a balanced system, and I also want to assign every index to some rank.
No, but if the work performed per array index takes roughly the same amount of time, you can simply scatter the array among the processes.
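For illustration, a minimal sketch of that (assuming, for this sketch only, that the indices are simply 0..N-1 and that N divides evenly by the number of ranks; with uneven division you would use MPI_Scatterv instead):
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_rank = 5; // assumes nindices == size * per_rank
    std::vector<int> indices;
    if (rank == 0) {
        indices.resize(size * per_rank);
        for (int i = 0; i < (int)indices.size(); ++i) indices[i] = i;
    }

    // Each rank receives its own contiguous block of indices.
    std::vector<int> my_indices(per_rank);
    MPI_Scatter(indices.data(), per_rank, MPI_INT,
                my_indices.data(), per_rank, MPI_INT, 0, MPI_COMM_WORLD);

    // ... each rank now works on my_indices ...

    MPI_Finalize();
    return 0;
}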
I was thinking about master-worker, where the master rank (0) gives each index to the other ranks. But when I run my code with 3 ranks and 15 indices, it halts in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
As already pointed out in the comments, the problem is that you are missing, on the worker side, the loop that keeps asking the master for work.
The load-balancer can be implemented as follows:
1. The master initially sends an iteration to each of the other workers;
2. Each worker waits for a message from the master;
3. Afterwards, the master calls MPI_Recv with MPI_ANY_SOURCE and waits for a worker to request more work;
4. After a worker finishes working on its first iteration, it sends its rank to the master, signaling the master to send a new iteration;
5. The master reads the rank sent by the worker in step 4, checks the array for a new index, and if there is still a valid index, sends it to the worker. Otherwise, it sends a special message signaling the worker that there is no more work to be performed. That message can be, for instance, -1;
6. When a worker receives the special message it stops working;
7. The master stops when all the workers have received the special message.
An example of such an approach:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc,char *argv[]){
MPI_Init(NULL,NULL); // Initialize the MPI environment
int rank;
int size;
MPI_Status status;
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Comm_size(MPI_COMM_WORLD,&size);
int work_is_done = -1;
if(rank == 0){
int max_index = 10;
int index_simulator = 0;
// Send statically the first iterations
for(int i = 1; i < size; i++){
MPI_Send(&index_simulator, 1, MPI_INT, i, i, MPI_COMM_WORLD);
index_simulator++;
}
int processes_finishing_work = 0;
do{
int process_that_wants_work = 0;
MPI_Recv(&process_that_wants_work, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
if(index_simulator < max_index){
MPI_Send(&index_simulator, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
index_simulator++;
}
else{ // send special message
MPI_Send(&work_is_done, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
processes_finishing_work++;
}
} while(processes_finishing_work < size - 1);
}
else{
int index_to_work = 0;
MPI_Recv(&index_to_work, 1, MPI_INT, 0, rank, MPI_COMM_WORLD, &status);
// Work with the iterations index_to_work
do{
MPI_Send(&rank, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
MPI_Recv(&index_to_work, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
if(index_to_work != work_is_done){
// Work with the iteration index_to_work
}
}while(index_to_work != work_is_done);
}
printf("Process {%d} -> I AM OUT\n", rank);
MPI_Finalize();
return 0;
}
You can improve on the approach above by reducing 1) the number of messages sent and 2) the time spent waiting for them. For the former, you can try a chunking strategy (i.e., sending more than one index per MPI message). For the latter, you can try nonblocking MPI communication, or use two threads per process: one to send/receive the work and another to actually perform it. The multithreaded approach would also let the master process work on array indices itself, but it significantly complicates the code.
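To make the chunking idea concrete, here is a rough sketch (not tested; CHUNK is an arbitrary value and the variable names are reused from the example above) of handing out a {start index, count} pair instead of a single index:
// Sketch of a chunked hand-out. CHUNK is an arbitrary value chosen for illustration.
const int CHUNK = 4;
int work[2]; // work[0] = first index, work[1] = number of indices ({-1,-1} means "no more work")

// Master side, after receiving a request from process_that_wants_work:
if(index_simulator < max_index){
    work[0] = index_simulator;
    work[1] = (max_index - index_simulator < CHUNK) ? (max_index - index_simulator) : CHUNK;
    index_simulator += work[1];
}
else{
    work[0] = work[1] = -1; // special "no more work" message
}
MPI_Send(work, 2, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);

// Worker side:
MPI_Recv(work, 2, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
if(work[0] >= 0){
    for(int index = work[0]; index < work[0] + work[1]; ++index){
        // Work with the iteration 'index'
    }
}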
I am implementing MPI non-blocking communication inside my program. The MPI_Isend man page says:
A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called, until the send completes.
My code works like this:
// send messages
if(s > 0){
MPI_Request s_requests[s];
MPI_Status s_status[s];
for(int i = 0; i < s; ++i){
// some code to form the message to send
std::vector<double> send_info;
// non-blocking send
MPI_Isend(&send_info[0], ..., s_requests[i]);
}
MPI_Waitall(s, s_requests, s_status);
}
// recv info
if(n > 0){ // s and n will match
for(int i = 0; i < n; ++i){
MPI_Status status;
// allocate the space to recv info
std::vector<double> recv_info;
MPI_Recv(&recv_info[0], ..., status)
}
}
My question is: am I modifying the send buffers, given that they are declared inside the inner curly brackets (each send_info vector is destroyed when its loop iteration ends)? If so, is this an unsafe communication mode? Although my program works fine now, I am still suspicious. Thank you for your reply.
There are two points I want to emphasize in this example.
The first one is the problem I asked about: the send buffer gets modified (in fact, destroyed) before MPI_Waitall. The reason is what Gilles said. The solution could be to allocate a big buffer before the for loop and call MPI_Waitall after the loop finishes, or to put an MPI_Wait inside the loop. But the latter is equivalent to using MPI_Send in terms of performance.
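For reference, the MPI_Wait-inside-the-loop variant I mean looks roughly like this (just a sketch; the destination and tag arguments are placeholders):
// Sketch: waiting inside the loop keeps each send_info alive until its send completes,
// but it serializes the sends, so performance-wise it is like using MPI_Send.
for(int i = 0; i < s; ++i){
    std::vector<double> send_info;
    // ... some code to form the message and fill send_info ...
    MPI_Request req;
    MPI_Isend(send_info.data(), (int)send_info.size(), MPI_DOUBLE,
              /* destination */ i, /* tag */ 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE); // send_info may now be safely destroyed
}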
However, I found that if you simply switch to blocking send and receive, a communication scheme like this can deadlock. It is similar to the classic deadlock:
if (rank == 0) {
MPI_Send(..., 1, tag, MPI_COMM_WORLD);
MPI_Recv(..., 1, tag, MPI_COMM_WORLD, &status);
} else if (rank == 1) {
MPI_Send(..., 0, tag, MPI_COMM_WORLD);
MPI_Recv(..., 0, tag, MPI_COMM_WORLD, &status);
}
An explanation can be found here.
My program can run into a similar situation: if all the processes call MPI_Send first, it is a deadlock.
So my solution is to use a large buffer per message and stick to the non-blocking communication scheme:
#include <vector>
#include <unordered_map>
// send messages
if(s > 0){
MPI_Request s_requests[s];
MPI_Status s_status[s];
std::unordered_map<int, std::vector<double>> send_info;
for(int i = 0; i < s; ++i){
// some code to form the message to send
send_info[i] = std::vector<double> ();
// non-blocking send
MPI_Isend(&send_info[i][0], ..., s_requests[i]);
}
MPI_Waitall(s, s_requests, s_status);
}
// recv info
if(n > 0){ // s and n will match
for(int i = 0; i < n; ++i){
MPI_Status status;
// allocate the space to recv info
std::vector<double> recv_info;
MPI_Recv(&recv_info[0], ..., status)
}
}
I am trying to send a message to all MPI processes from one process, and also to receive a message from all of those processes in that one process. It is basically an all-to-all communication, where every process sends a message to every other process (except itself) and receives a message from every other process.

The following example code snippet shows what I am trying to achieve. The problem with MPI_Send is its behavior: for a small message size it acts as non-blocking, but for larger messages (on my machine, BUFFER_SIZE 16400) it blocks. I am aware that this is how MPI_Send behaves.

As a workaround, I replaced the code below with blocking (send + recv), i.e. MPI_Sendrecv, like this: MPI_Sendrecv(intSendPack, BUFFER_SIZE, MPI_INT, processId, MPI_TAG, intReceivePack, BUFFER_SIZE, MPI_INT, processId, MPI_TAG, MPI_COMM_WORLD, MPI_STATUSES_IGNORE). I make that call for all the processes of MPI_COMM_WORLD inside a loop over every rank, and this approach gives me what I am trying to achieve (all-to-all communication). However, it takes a lot of time, which I want to cut down with a more time-efficient approach.

I have tried MPI scatter and gather to perform the all-to-all communication, but one issue is that the buffer size (16400) may differ between iterations of the MPI_all_to_all function call. Here I am using MPI_TAG to differentiate calls in different iterations, which I cannot do with the scatter and gather functions.
#define BUFFER_SIZE 16400
void MPI_all_to_all(int MPI_TAG)
{
int size;
int rank;
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
int* intSendPack = new int[BUFFER_SIZE]();
int* intReceivePack = new int[BUFFER_SIZE]();
for (int prId = 0; prId < size; prId++) {
if (prId != rank) {
MPI_Send(intSendPack, BUFFER_SIZE, MPI_INT, prId, MPI_TAG,
MPI_COMM_WORLD);
}
}
for (int sId = 0; sId < size; sId++) {
if (sId != rank) {
MPI_Recv(intReceivePack, BUFFER_SIZE, MPI_INT, sId, MPI_TAG,
MPI_COMM_WORLD, MPI_STATUSES_IGNORE);
}
}
}
I want to know whether there is a way to perform all-to-all communication using an efficient communication model. I am not wedded to MPI_Send; if there is some other way that achieves what I need, I am happy with that. Any help or suggestion is much appreciated.
This is a benchmark that allows you to compare the performance of collective vs. point-to-point communication in an all-to-all pattern:
#include <iostream>
#include <algorithm>
#include <mpi.h>
#define BUFFER_SIZE 16384
void point2point(int*, int*, int, int);
int main(int argc, char *argv[])
{
MPI_Init(&argc, &argv);
int rank_id = 0, com_sz = 0;
double t0 = 0.0, tf = 0.0;
MPI_Comm_size(MPI_COMM_WORLD, &com_sz);
MPI_Comm_rank(MPI_COMM_WORLD, &rank_id);
int* intSendPack = new int[BUFFER_SIZE]();
int* result = new int[BUFFER_SIZE*com_sz]();
std::fill(intSendPack, intSendPack + BUFFER_SIZE, rank_id);
std::fill(result + BUFFER_SIZE*rank_id, result + BUFFER_SIZE*(rank_id+1), rank_id);
// Send-Receive
t0 = MPI_Wtime();
point2point(intSendPack, result, rank_id, com_sz);
MPI_Barrier(MPI_COMM_WORLD);
tf = MPI_Wtime();
if (!rank_id)
std::cout << "Send-receive time: " << tf - t0 << std::endl;
// Collective
std::fill(result, result + BUFFER_SIZE*com_sz, 0);
std::fill(result + BUFFER_SIZE*rank_id, result + BUFFER_SIZE*(rank_id+1), rank_id);
t0 = MPI_Wtime();
MPI_Allgather(intSendPack, BUFFER_SIZE, MPI_INT, result, BUFFER_SIZE, MPI_INT, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
tf = MPI_Wtime();
if (!rank_id)
std::cout << "Allgather time: " << tf - t0 << std::endl;
MPI_Finalize();
delete[] intSendPack;
delete[] result;
return 0;
}
// Send/receive communication
void point2point(int* send_buf, int* result, int rank_id, int com_sz)
{
MPI_Status status;
// Exchange and store the data
for (int i=0; i<com_sz; i++){
if (i != rank_id){
MPI_Sendrecv(send_buf, BUFFER_SIZE, MPI_INT, i, 0,
result + i*BUFFER_SIZE, BUFFER_SIZE, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
}
}
}
Here every rank contributes its own array intSendPack to the array result on all other ranks, and result should end up the same on all the ranks. result is flat: each rank owns BUFFER_SIZE entries starting at rank_id*BUFFER_SIZE. After the point-to-point communication, the array is reset to its initial state before the collective run.
Time is measured by placing an MPI_Barrier before the final MPI_Wtime() call, which gives you the maximum time across all ranks.
I ran the benchmark on 1 node of NERSC Cori KNL using Slurm. I ran it a few times for each case just to make sure the values are consistent and I'm not looking at an outlier, but you should run it maybe 10 or so times to collect proper statistics.
Here are some thoughts:
For a small number of processes (5) and a large buffer size (16384), collective communication is about twice as fast as point-to-point, but it becomes about 4-5 times faster when moving to a larger number of ranks (64).
In this benchmark there is not much difference between the recommended Slurm settings for that specific machine and the default settings, but in real, larger programs with more communication there is a very significant one (a job that runs for less than a minute with the recommended settings can run for 20-30 minutes or more with the defaults). The point is: check your settings, it may make a difference.
What you were seeing with Send/Receive for larger messages was actually a deadlock. I saw it too for the message size used in this benchmark. In case you missed them, there are two SO posts worth reading on this: a buffering explanation and a word on deadlocking.
In summary, adjust this benchmark to represent your code more closely and run it on your system, but collective communication in all-to-all or one-to-all situations should be faster because of dedicated optimizations such as superior algorithms for arranging the communication. A 2-5x speedup is considerable, since communication often contributes the most to the overall time.
I am new to MPI. I want to send three ints to three slave nodes to create dynamic arrays, and each array will be sent back to the master. According to this post, I modified the code, and it's close to the right answer. But I hit a breakpoint when receiving the array from slave #3 (m == 3) in the receiver code. Thank you in advance!
My code is as follows:
#include <mpi.h>
#include <iostream>
#include <stdlib.h>
int main(int argc, char** argv)
{
int firstBreakPt, lateralBreakPt;
//int reMatNum1, reMatNum2;
int tmpN;
int breakPt[3][2]={{3,5},{6,9},{4,7}};
int myid, numprocs;
MPI_Status status;
// double *reMat1;
// double *reMat2;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
tmpN = 15;
if (myid==0)
{
// send three parameters to slaves;
for (int i=1;i<numprocs;i++)
{
MPI_Send(&tmpN,1,MPI_INT,i,0,MPI_COMM_WORLD);
firstBreakPt = breakPt[i-1][0];
lateralBreakPt = breakPt[i-1][1];
//std::cout<<i<<" "<<breakPt[i-1][0] <<" "<<breakPt[i-1][1]<<std::endl;
MPI_Send(&firstBreakPt,1,MPI_INT,i,1,MPI_COMM_WORLD);
MPI_Send(&lateralBreakPt,1,MPI_INT,i,2,MPI_COMM_WORLD);
}
// receive arrays from slaves;
for (int m =1; m<numprocs; m++)
{
MPI_Probe(m, 3, MPI_COMM_WORLD, &status);
int nElems3, nElems4;
MPI_Get_elements(&status, MPI_DOUBLE, &nElems3);
// Allocate buffer of appropriate size
double *result3 = new double[nElems3];
MPI_Recv(result3,nElems3,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);
std::cout<<"Tag is 3, ID is "<<m<<std::endl;
for (int ii=0;ii<nElems3;ii++)
{
std::cout<<result3[ii]<<std::endl;
}
MPI_Probe(m, 4, MPI_COMM_WORLD, &status);
MPI_Get_elements(&status, MPI_DOUBLE, &nElems4);
// Allocate buffer of appropriate size
double *result4 = new double[nElems4];
MPI_Recv(result4,nElems4,MPI_DOUBLE,m,4,MPI_COMM_WORLD,&status);
std::cout<<"Tag is 4, ID is "<<m<<std::endl;
for (int ii=0;ii<nElems4;ii++)
{
std::cout<<result4[ii]<<std::endl;
}
}
}
else
{
// receive three parameters from master;
MPI_Recv(&tmpN,1,MPI_INT,0,0,MPI_COMM_WORLD,&status);
MPI_Recv(&firstBreakPt,1,MPI_INT,0,1,MPI_COMM_WORLD,&status);
MPI_Recv(&lateralBreakPt,1,MPI_INT,0,2,MPI_COMM_WORLD,&status);
// width
int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;
// create dynamic arrays
double *reMat1 = new double[width1*width1];
double *reMat2 = new double[width2*width2];
for (int n=0;n<width1; n++)
{
for (int j=0;j<width1; j++)
{
reMat1[n*width1+j]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX);
//a[i*Width+j]=1.00;
}
}
for (int k=0;k<width2; k++)
{
for (int h=0;h<width2; h++)
{
reMat2[k*width2+h]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX);
//a[i*Width+j]=1.00;
}
}
// send it back to master
MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);
MPI_Send(reMat2,width2*width2,MPI_DOUBLE,0,4,MPI_COMM_WORLD);
}
MPI_Finalize();
std::cin.get();
return 0;
}
P.S. This code is the right answer.
Use collective MPI operations, as Zulan suggested. For example, the first thing your code does is have the root send the same value to all the slaves, which is a broadcast, i.e., MPI_Bcast(). Then the root sends a different value to each slave, which is a scatter, i.e., MPI_Scatter().
The last operation is that the slave processes send variably-sized data to the root, for which the MPI_Gatherv() function exists. However, to use this function, you need to:
allocate the incoming buffer on the root (there is no allocation of reMat1 and reMat2 in the first if-branch of your code), so the root needs to know their counts, and
tell MPI_Gatherv() on the root how many elements will be received from each slave and where to put them.
This problem can be easily solved with a so-called parallel prefix sum; look at MPI_Scan() or MPI_Exscan().
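To make that concrete, here is a rough sketch (illustrative only, not the poster's code) in which each process contributes its local element count, the root gathers the counts, builds the displacements (the same exclusive prefix sum that MPI_Exscan computes in the distributed case), and then calls MPI_Gatherv():
#include <mpi.h>
#include <vector>

// Sketch: gather variably-sized blocks of doubles onto rank 0.
// 'local_data' stands in for a reMat1/reMat2-style payload.
void gather_variable(const std::vector<double>& local_data, int rank, int size)
{
    int my_count = (int)local_data.size();

    std::vector<int> counts(size), displs(size);
    MPI_Gather(&my_count, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

    std::vector<double> all_data;
    if (rank == 0) {
        int total = 0;
        for (int i = 0; i < size; ++i) { displs[i] = total; total += counts[i]; } // exclusive prefix sum
        all_data.resize(total);
    }

    // Only the root's recvbuf, counts and displs are significant.
    MPI_Gatherv(local_data.data(), my_count, MPI_DOUBLE,
                all_data.data(), counts.data(), displs.data(), MPI_DOUBLE,
                0, MPI_COMM_WORLD);
}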
Here you create randomized widths:
int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;
which you later use to send data back to process 0
MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);
But the receiving side expects a different number of elements:
MPI_Recv(reMat1,firstBreakPt*tmpN*firstBreakPt*tmpN,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);
which causes problems. The master does not know what sizes each slave process generated, so you have to send those sizes back first, the same way you sent the size parameters to the slaves.
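A minimal sketch of what I mean (tags 5 and 6 are arbitrary choices for this illustration; the MPI_Probe/MPI_Get_elements approach in the updated question code is another valid way to solve it):
// Slave side: announce the sizes first, then send the data.
MPI_Send(&width1, 1, MPI_INT, 0, 5, MPI_COMM_WORLD);
MPI_Send(&width2, 1, MPI_INT, 0, 6, MPI_COMM_WORLD);
MPI_Send(reMat1, width1*width1, MPI_DOUBLE, 0, 3, MPI_COMM_WORLD);
MPI_Send(reMat2, width2*width2, MPI_DOUBLE, 0, 4, MPI_COMM_WORLD);

// Master side, inside the loop over slaves m: receive the sizes, allocate, then receive the data.
int w1 = 0, w2 = 0;
MPI_Recv(&w1, 1, MPI_INT, m, 5, MPI_COMM_WORLD, &status);
MPI_Recv(&w2, 1, MPI_INT, m, 6, MPI_COMM_WORLD, &status);
double *reMat1 = new double[w1*w1];
double *reMat2 = new double[w2*w2];
MPI_Recv(reMat1, w1*w1, MPI_DOUBLE, m, 3, MPI_COMM_WORLD, &status);
MPI_Recv(reMat2, w2*w2, MPI_DOUBLE, m, 4, MPI_COMM_WORLD, &status);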
The issue I am trying to resolve is the following:
The C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run on 4 nodes (say) using MPI. The only communication that occurs between nodes is the sharing of edge values at the end of each time step. Every node shares the edge array data, A[i][j], with its neighbor.
Based on reading about MPI, I have the following scheme to be implemented.
if (myrank == 0)
{
for (i= 0 to x)
for (y= 0 to y)
{
C++ CODE IMPLEMENTATION
....
MPI_SEND(A[x][0], A[x][1], A[x][2], Destination= 1.....)
MPI_RECEIVE(B[0][0], B[0][1]......Sender = 1.....)
MPI_BARRIER
}
if (myrank == 1)
{
for (i = x+1 to xx)
for (y = 0 to y)
{
C++ CODE IMPLEMENTATION
....
MPI_SEND(B[x][0], B[x][1], B[x][2], Destination= 0.....)
MPI_RECEIVE(A[0][0], A[0][1]......Sender = 1.....)
MPI BARRIER
}
I wanted to know if my approach is correct, and I would also appreciate any guidance on other MPI functions to look into for the implementation.
Thanks,
Ashwin.
Just to amplify Joel's points a bit:
This goes much more easily if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically):
int **alloc_2d_int(int rows, int cols) {
int *data = (int *)malloc(rows*cols*sizeof(int));
int **array= (int **)malloc(rows*sizeof(int*));
for (int i=0; i<rows; i++)
array[i] = &(data[cols*i]);
return array;
}
/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);
Then, you can do sends and receives of the entire NxM array with
MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);
and when you're done, free the memory with
free(A[0]);
free(A);
Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:
if (myrank == 0) {
MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}
Another, more general, approach is to use MPI_Sendrecv:
int *sendptr, *recvptr;
int neigh = MPI_PROC_NULL;
if (myrank == 0) {
sendptr = &(A[0][0]);
recvptr = &(B[0][0]);
neigh = 1;
} else {
sendptr = &(B[0][0]);
recvptr = &(A[0][0]);
neigh = 0;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, tagA, recvptr, N*M, MPI_INT, neigh, tagB, MPI_COMM_WORLD, &status);
or nonblocking sends and/or receives.
First, you don't need that many barriers.
Second, you should really send your data as a single block, as multiple blocking send/receive calls will result in poor performance.
This question has already been answered quite thoroughly by Jonathan Dursi; however, as Jonathan Leffler has pointed out in his comment to Jonathan Dursi's answer, C's multi-dimensional arrays are a contiguous block of memory. Therefore, I would like to point out that a not-too-large 2d array could simply be created on the stack:
int A[N][M];
Since the memory is contiguous, the array can be sent as it is:
MPI_Send(A, N*M, MPI_INT,1, tagA, MPI_COMM_WORLD);
On the receiving side, the array can be received into a 1d array of size N*M (which can then be copied into a 2d array if necessary):
int A_1d[N*M];
MPI_Recv(A_1d, N*M, MPI_INT,0,tagA, MPI_COMM_WORLD,&status);
//copying the array to a 2d-array
int A_2d[N][M];
for (int i = 0; i < N; i++){
for (int j = 0; j < M; j++){
A_2d[i][j] = A_1d[(i*M)+j];
}
}
Copying the array does cause twice the memory to be used, so it would be better to simply use A_1d by accessing its elements through A_1d[(i*M)+j].