Dynamic load balancing master-worker - C++

I have an array of indices, and I want each worker to do something based on these indices.
The size of the array may be larger than the total number of ranks, so my first question is whether there is an alternative to master-worker load balancing here. I want a balanced system, and I also want to assign each index to a rank.
I was thinking about master-worker, where the master rank (0) hands each index out to the other ranks. But when I run my code with 3 ranks and 15 indices, it hangs in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
if (pCurrentID == 0) { // Master
    MPI_Status status;
    int nindices = 15;
    int mesg[1] = {0};
    int initial_id = 0;
    int recv_mesg[1] = {0};

    // -- send out initial ids to workers --//
    while (initial_id < size - 1) {
        if (initial_id < nindices) {
            MPI_Send(mesg, 1, MPI_INT, initial_id + 1, 1, MPI_COMM_WORLD);
            mesg[0] += 1;
            ++initial_id;
        }
    }

    //-- hand out id to workers dynamically --//
    while (mesg[0] != nindices) {
        MPI_Probe(MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
        int isource = status.MPI_SOURCE;
        MPI_Recv(recv_mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD, &status);
        MPI_Send(mesg, 1, MPI_INT, isource, 1, MPI_COMM_WORLD);
        mesg[0] += 1;
    }

    //-- hand out ending signals once done --//
    for (int rank = 1; rank < size; ++rank) {
        mesg[0] = -1;
        MPI_Send(mesg, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
    }
} else {
    MPI_Status status;
    int id[1] = {0};

    // Get the surrounding fragment id
    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    int itag = status.MPI_TAG;
    MPI_Recv(id, 1, MPI_INT, 0, itag, MPI_COMM_WORLD, &status);
    int jfrag = id[0];
    if (jfrag < 0) break;

    // do something
    MPI_Send(id, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}

I have an array of indices, and I want each worker to do something based on these indices. The size of the array may be larger than the total number of ranks, so my first question is whether there is an alternative to master-worker load balancing here. I want a balanced system, and I also want to assign each index to a rank.
No, but if the work performed per array index takes roughly the same amount of time, you can simply scatter the array among the processes.
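For instance, a minimal sketch of that static-scatter alternative (assuming the number of indices divides evenly among the ranks; the array contents here are just illustrative):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nindices = 15;           // assumed divisible by the number of ranks
    const int chunk = nindices / size; // indices handled by each rank

    int *all_indices = NULL;
    if (rank == 0) {                   // only the root needs the full array
        all_indices = (int *)malloc(nindices * sizeof(int));
        for (int i = 0; i < nindices; ++i) all_indices[i] = i;
    }

    int *my_indices = (int *)malloc(chunk * sizeof(int));
    MPI_Scatter(all_indices, chunk, MPI_INT,
                my_indices, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < chunk; ++i)
        printf("Rank %d works on index %d\n", rank, my_indices[i]);

    free(my_indices);
    if (rank == 0) free(all_indices);
    MPI_Finalize();
    return 0;
}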
I was thinking about master-worker, where the master rank (0) hands each index out to the other ranks. But when I run my code with 3 ranks and 15 indices, it hangs in the while loop when sending index 4. I was wondering if anybody can help me find the problem.
As already pointed out in the comments, the problem is that you are missing, on the worker side, the loop that keeps asking the master for more work.
The load-balancer can be implemented as follows:
1. The master initially sends an iteration to each of the other workers;
2. Each worker waits for a message from the master;
3. The master then calls MPI_Recv with MPI_ANY_SOURCE and waits for a worker to request more work;
4. After a worker has finished working on its first iteration, it sends its rank to the master, signaling the master to send a new iteration;
5. The master reads the rank sent by the worker in step 4, checks the array for a new index and, if there is still a valid index, sends it to the worker; otherwise it sends a special message (for instance -1) signaling the worker that there is no more work to be performed;
6. When a worker receives the special message, it stops working;
7. The master stops when all the workers have received the special message.
An example of such approach:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]){
    MPI_Init(NULL, NULL); // Initialize the MPI environment
    int rank;
    int size;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int work_is_done = -1;

    if(rank == 0){
        int max_index = 10;
        int index_simulator = 0;
        // Send statically the first iterations
        for(int i = 1; i < size; i++){
            MPI_Send(&index_simulator, 1, MPI_INT, i, i, MPI_COMM_WORLD);
            index_simulator++;
        }
        int processes_finishing_work = 0;
        do{
            int process_that_wants_work = 0;
            MPI_Recv(&process_that_wants_work, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
            if(index_simulator < max_index){
                MPI_Send(&index_simulator, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
                index_simulator++;
            }
            else{ // send special message
                MPI_Send(&work_is_done, 1, MPI_INT, process_that_wants_work, 1, MPI_COMM_WORLD);
                processes_finishing_work++;
            }
        } while(processes_finishing_work < size - 1);
    }
    else{
        int index_to_work = 0;
        MPI_Recv(&index_to_work, 1, MPI_INT, 0, rank, MPI_COMM_WORLD, &status);
        // Work with the iteration index_to_work
        do{
            MPI_Send(&rank, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
            MPI_Recv(&index_to_work, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            if(index_to_work != work_is_done){
                // Work with the iteration index_to_work
            }
        } while(index_to_work != work_is_done);
    }

    printf("Process {%d} -> I AM OUT\n", rank);
    MPI_Finalize();
    return 0;
}
You can improve upon the aforementioned approach by reducing 1) the number of messages sent and 2) the time spent waiting for them. For the former you can try a chunking strategy (i.e., sending more than one index per MPI communication). For the latter you can experiment with nonblocking MPI communications, or use two threads per process, one to receive/send the work and another to actually perform it. The multithreading approach would also allow the master process to work on array indices itself, but it significantly complicates the design.
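As a hedged illustration of the chunking idea (not part of the original code), here is a standalone sketch in which the master hands out blocks of indices and each reply carries a {first index, count} pair; the chunk size and message layout are assumptions made for this example:

#include <mpi.h>
#include <stdio.h>

#define CHUNK 4  // illustrative chunk size

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int max_index = 100;
    MPI_Status status;

    if (rank == 0) {
        int next = 0, finished = 0;
        // Serve chunk requests until every worker has been told to stop.
        while (finished < size - 1) {
            int who;
            MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
            int msg[2];                          // msg = {first index, count}
            if (next < max_index) {
                msg[0] = next;
                msg[1] = (max_index - next < CHUNK) ? (max_index - next) : CHUNK;
                next += msg[1];
            } else {
                msg[0] = -1;                     // no more work for this worker
                msg[1] = 0;
                ++finished;
            }
            MPI_Send(msg, 2, MPI_INT, who, 1, MPI_COMM_WORLD);
        }
    } else {
        int msg[2];
        do {
            MPI_Send(&rank, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);  // request a chunk
            MPI_Recv(msg, 2, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            for (int i = 0; i < msg[1]; ++i)
                printf("Rank %d works on index %d\n", rank, msg[0] + i);
        } while (msg[0] != -1);
    }

    MPI_Finalize();
    return 0;
}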

Related

MPI_Send/MPI_Recv fail when the array size increases

I am trying to write a parallel 3D Poisson solver using Open MPI 1.6.4.
The following is the part of my code that does the parallel computation using blocking send/receive.
The following variables are declared in another file:
int px = lx*meshx; // number of meshing points along the x axis
int py = ly*meshy;
int pz = lz*meshz;
int L = px * py * pz;
The following code works well with lx = ly = lz = 10, meshx = meshy = 2, and any integer value of meshz.
The send/receive part fails when meshx and meshy are larger than 4:
the program hangs there, waiting to send or receive data.
But it works if I only send data from one processor to another without exchanging it
(i.e., send from rank 0 to 1, but don't send from 1 to 0).
I can't understand why this code works when meshx and meshy are small but fails when they are large.
Does a blocking send/receive interrupt itself, or am I confusing the processors in my code? Does my array size matter?
#include "MPI-practice.h"
# include <iostream>
# include <math.h>
# include <string.h>
# include <time.h>
# include <sstream>
# include <string>
# include "mpi.h"
using namespace std;
extern int px,py,pz;
extern int L;
extern double simTOL_phi;
extern vector<double> phi;
int main(int argc, char *argv[]){
int numtasks, taskid, offset_A, offset_B, DD_loop,s,e;
double errPhi(0),errPhi_sum(0);
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
MPI_Status status;
if((pz-1)%numtasks!=0){
//cerr << "can not properly divide meshing points."<<endl;
exit(0);
}
offset_A=(pz-1)/numtasks*px*py;
offset_B=((pz-1)/numtasks+1)*px*py;
s=offset_A*taskid;
e=offset_A*taskid+offset_B;
int pz_offset_A=(pz-1)/numtasks;
int pz_offset_B=(pz-1)/numtasks+1;
stringstream name1;
string name2;
Setup_structure();
Initialize();
Build_structure();
if (taskid==0){
//master processor
ofstream output;
output.open("time", fstream::out | fstream::app);
output.precision(6);
clock_t start,end;
start=clock();
do{
errPhi_sum=0;
errPhi=Poisson_inner(taskid,numtasks,pz_offset_A,pz_offset_B);
//Right exchange
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
MPI_Allreduce ( &errPhi, &errPhi_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD );
}while(errPhi_sum>simTOL_phi);
end=clock();
output << "task "<< 0 <<" = "<< (end-start)/CLOCKS_PER_SEC <<endl<<endl;
Print_to_file("0.txt");
//recv from slave
for (int i=1;i<numtasks;i++){
MPI_Recv(&phi[offset_A*i], offset_B, MPI_DOUBLE, i, 1, MPI_COMM_WORLD, &status);
}
Print_to_file("sum.txt");
}
else{
//slave processor
do{
errPhi=Poisson_inner(taskid,numtasks,pz_offset_A,pz_offset_B);
//Left exchange
MPI_Send(&phi[s+px*py], px*py, MPI_DOUBLE, taskid-1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[s], px*py, MPI_DOUBLE, taskid-1, 1, MPI_COMM_WORLD, &status);
//Right exchange
if(taskid!=numtasks-1){
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
}
MPI_Allreduce ( &errPhi, &errPhi_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD );
}while(errPhi_sum>simTOL_phi);
//send back master
MPI_Send(&phi[s], offset_B, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
name1<<taskid<<".txt";
name2=name1.str();
Print_to_file(name2.c_str());
}
MPI_Finalize();
}
Replace all coupled MPI_Send/MPI_Recv calls with calls to MPI_Sendrecv. For example, this
MPI_Send(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD);
MPI_Recv(&phi[e], px*py, MPI_DOUBLE, taskid+1, 1, MPI_COMM_WORLD, &status);
becomes
MPI_Sendrecv(&phi[e-px*py], px*py, MPI_DOUBLE, taskid+1, 1,
             &phi[e], px*py, MPI_DOUBLE, taskid+1, 1,
             MPI_COMM_WORLD, &status);
MPI_Sendrecv uses non-blocking operations internally and thus it does not deadlock, even if two ranks are sending to each other at the same time. The only requirement (as usual) is that each send is matched by a receive.
The problem is in your innermost loop. Both tasks do a blocking send at the same time, and the program hangs. It doesn't hang with smaller data sets because the MPI library has a big enough internal buffer to hold the data, but once you grow the message beyond that buffer size the send blocks both processes. Since neither process is trying to receive, neither buffer can empty and the program deadlocks.
To fix it, have the slave first receive from the master, then send its data back. If your sends and receives don't conflict, you can simply switch the order of the calls; otherwise you need to create a temporary buffer to hold the data.
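For illustration only, a standalone sketch of that ordering fix (the buffers here are simple stand-ins for phi, px and py in the original code): the exchange with the right neighbour keeps the send-then-receive order, while the exchange with the left neighbour receives first, so every blocking send meets an already-posted receive.

#include <mpi.h>
#include <stdio.h>
#include <vector>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;  // large enough that the sends cannot complete eagerly
    std::vector<double> send_buf(n, static_cast<double>(rank));
    std::vector<double> recv_right(n, 0.0), recv_left(n, 0.0);
    MPI_Status status;

    // Exchange with the right neighbour (rank + 1): send first, then receive.
    if (rank + 1 < size) {
        MPI_Send(send_buf.data(), n, MPI_DOUBLE, rank + 1, 1, MPI_COMM_WORLD);
        MPI_Recv(recv_right.data(), n, MPI_DOUBLE, rank + 1, 1, MPI_COMM_WORLD, &status);
    }
    // Exchange with the left neighbour (rank - 1): receive first, then send,
    // so the left neighbour's blocking send above always finds a matching receive.
    if (rank - 1 >= 0) {
        MPI_Recv(recv_left.data(), n, MPI_DOUBLE, rank - 1, 1, MPI_COMM_WORLD, &status);
        MPI_Send(send_buf.data(), n, MPI_DOUBLE, rank - 1, 1, MPI_COMM_WORLD);
    }

    printf("Rank %d finished its neighbour exchange\n", rank);
    MPI_Finalize();
    return 0;
}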

MPI implementation to verify

I am trying to use MPI_Gather to recover data from the slaves. Basically, a simulation runs on each slave (a different one on each), and I want to recover one integer on the master (the result of the simulation). From these integers I calculate a new value 'a' on the master, which I send back to each slave so it can run a new simulation with this better parameter.
I hope this is clear; I am pretty new to MPI.
Note: the simulations will not all finish at the same time.
int main
    while(true){
        if (rank==0) runMaster();
        else runSlave();
    }

runMaster()
    receive data b from all slaves (with MPI_Gather)
    calculate parameter a for each slave; aTotal=[a_1,...,a_n]
    MPI_Scatter(aTotal, to slaves)

runSlave()
    a=aTotal[rank]
    simulationRun(a){return b}
    MPI_Gather(&b, to master)
To avoid the deadlock, each slave is initialized with a random a.
I created a small test case, because I don't see how I can use MPI_Gather in my slaves:
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main (int argc, char *argv[]) {
    int size;
    int rank;
    int a[12];
    int i;
    int start, end;
    int b;

    MPI_Init(&argc, &argv);
    MPI_Status status;
    MPI_Request req;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int* bb = new int[size];
    int source;

    //master
    if(!rank){
        while(true){
            b=12;
            MPI_Recv(&bb[0], 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            source = status.MPI_SOURCE;
            printf("master receive b %d from source %d \n", bb[0], source);
            if (source == 1) goto finish;
        }
    }

    //slave
    if(rank){
        b=13;
        if (rank==1) {b=15; sleep(2);}
        int source = rank;
        printf("slave %d will send b %d \n", source, b);
        // MPI_Gather(&b,1,MPI_INT,bb,1,MPI_INT,0,MPI_COMM_WORLD); // unworking, not called by master
        MPI_Send(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

finish:
    MPI_Finalize();
    return 0;
}
I am trying to send the slave data to the master with a collective command.
Is this implementation realistic?
What you propose sounds reasonable. Another approach would be to have the slaves all broadcast their results to each other in one go (MPI_Allgather); then you can implement the scoring and what-to-try-next algorithm directly in each slave. If the scoring algorithm is not too complex, the overhead of running it in every slave will be worth it in terms of speed, because the slaves will not have to communicate with the master at all, saving one communication on each iteration.
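To make that concrete, here is a minimal sketch of the all-gather variant; the simulationRun and next_parameter functions are placeholders for the real simulation and scoring, and in this sketch every rank (including rank 0) runs a simulation, which is the simplest form of the idea.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

// Placeholder for the real simulation: returns a result b for parameter a.
static int simulationRun(int a) { return a + 1; }

// Placeholder scoring step: derives this rank's next parameter
// from the results of all ranks.
static int next_parameter(const int *all_b, int size, int rank) {
    int best = all_b[0];
    for (int i = 1; i < size; ++i)
        if (all_b[i] > best) best = all_b[i];
    return best + rank;  // illustrative only
}

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int a = rank;                                   // initial parameter
    int *all_b = (int *)malloc(size * sizeof(int));

    for (int iter = 0; iter < 5; ++iter) {
        int b = simulationRun(a);
        // Every rank contributes its result and receives everyone else's.
        MPI_Allgather(&b, 1, MPI_INT, all_b, 1, MPI_INT, MPI_COMM_WORLD);
        a = next_parameter(all_b, size, rank);      // scoring done locally
    }

    printf("Rank %d final parameter %d\n", rank, a);
    free(all_b);
    MPI_Finalize();
    return 0;
}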

MPI_Isend reusing internal buffer

I have a Finite Element code that uses blocking receives and non-blocking sends. Each element has 3 incoming faces and 3 outgoing faces. The mesh is split up among many processors, so the boundary conditions sometimes come from the element's own processor and sometimes from neighboring processors. The relevant parts of the code are:
std::vector<task>::iterator it = All_Tasks.begin();
std::vector<task>::iterator it_end = All_Tasks.end();

int task = 0;
for (; it != it_end; it++, task++)
{
    for (int f = 0; f < 3; f++)
    {
        // Get the neighbors for each incoming face
        Neighbor neighbor = subdomain.CellSets[(*it).cellset_id_loc].neighbors[incoming[f]];

        // Get buffers from boundary conditions or neighbor processors
        if (neighbor.processor == rank)
        {
            subdomain.Set_buffer_from_bc(incoming[f]);
        }
        else
        {
            // Get the flag from the corresponding send
            target = GetTarget((*it).angle_id, (*it).group_id, (*it).cell_id);
            if (incoming[f] == x)
            {
                int size = cells_y*cells_z*groups*angles*4;
                MPI_Status status;
                MPI_Recv(&subdomain.X_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
            }
            if (incoming[f] == y)
            {
                int size = cells_x*cells_z*groups*angles * 4;
                MPI_Status status;
                MPI_Recv(&subdomain.Y_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
            }
            if (incoming[f] == z)
            {
                int size = cells_x*cells_y*groups*angles * 4;
                MPI_Status status;
                MPI_Recv(&subdomain.Z_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &status);
            }
        }
    }

    ... computation ...

    for (int f = 0; f < 3; f++)
    {
        // Get the outgoing neighbors for each face
        Neighbor neighbor = subdomain.CellSets[(*it).cellset_id_loc].neighbors[outgoing[f]];

        if (neighbor.IsOnBoundary)
        {
            // store the buffer into the boundary information
        }
        else
        {
            target = GetTarget((*it).angle_id, (*it).group_id, neighbor.cell_id);
            if (outgoing[f] == x)
            {
                int size = cells_y*cells_z*groups*angles * 4;
                MPI_Request request;
                MPI_Isend(&subdomain.X_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
            }
            if (outgoing[f] == y)
            {
                int size = cells_x*cells_z*groups*angles * 4;
                MPI_Request request;
                MPI_Isend(&subdomain.Y_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
            }
            if (outgoing[f] == z)
            {
                int size = cells_x*cells_y*groups*angles * 4;
                MPI_Request request;
                MPI_Isend(&subdomain.Z_buffer[0], size, MPI_DOUBLE, neighbor.processor, target, MPI_COMM_WORLD, &request);
            }
        }
    }
}
A processor can do a lot of tasks before it needs information from other processors. I need a non-blocking send so that the code can keep working, but I'm pretty sure the receives are overwriting the send buffers before they get sent.
I've tried timing this code, and it's taking 5-6 seconds for the call to MPI_Recv, even though the message it's trying to receive has been sent. My theory is that the Isend is starting, but not actually sending anything until the Recv is called. The message itself is on the order of 1 MB. I've looked at benchmarks and messages of this size should take a very small fraction of a second to send.
My question is, in this code, is the buffer that was sent being overwritten, or just the local copy? Is there a way to 'add' to a buffer when I'm sending, rather than writing to the same memory location? I want the Isend to write to a different buffer every time it's called so the information isn't being overwritten while the messages wait to be received.
** EDIT **
A related question that might fix my problem: Can MPI_Test or MPI_Wait give information about an MPI_Isend writing to a buffer, i.e. return true if the Isend has written to the buffer, but that buffer has yet to be received?
** EDIT 2 **
I have added more information about my problem.
So it looks like I just have to bite the bullet and allocate enough memory in the send buffers to accommodate all the messages, and then just send portions of the buffer when I send.
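For what it's worth, here is a reduced sketch of that buffering scheme (a plain ring exchange with illustrative message counts and sizes, not the finite-element code itself): each MPI_Isend writes from its own slice of a larger buffer and gets its own MPI_Request, and MPI_Waitall is called before any slice is reused or freed.

#include <mpi.h>
#include <stdio.h>
#include <vector>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_msgs = 4;                 // outstanding sends per rank (illustrative)
    const int msg_len = 1000;             // doubles per message (illustrative)
    const int right = (rank + 1) % size;
    const int left  = (rank - 1 + size) % size;

    // One contiguous buffer with a dedicated slice per outstanding send,
    // so no slice is overwritten while its Isend is still in flight.
    std::vector<double> send_buf(n_msgs * msg_len);
    std::vector<double> recv_buf(n_msgs * msg_len);
    std::vector<MPI_Request> send_reqs(n_msgs);

    for (int m = 0; m < n_msgs; ++m) {
        double *slice = &send_buf[m * msg_len];
        for (int i = 0; i < msg_len; ++i) slice[i] = rank + m;  // fill this slice only
        MPI_Isend(slice, msg_len, MPI_DOUBLE, right, m, MPI_COMM_WORLD, &send_reqs[m]);
    }

    // Matching blocking receives from the left neighbour.
    for (int m = 0; m < n_msgs; ++m)
        MPI_Recv(&recv_buf[m * msg_len], msg_len, MPI_DOUBLE, left, m,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Only after MPI_Waitall may send_buf be modified or freed.
    MPI_Waitall(n_msgs, send_reqs.data(), MPI_STATUSES_IGNORE);

    printf("Rank %d completed %d buffered sends\n", rank, n_msgs);
    MPI_Finalize();
    return 0;
}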

How MPI_Waitall actually works

Does this example contradict the manual? The manual states that both the array of requests and the array of statuses must be of the same size. To be more precise, both arrays should be at least as long as indicated by the count argument. Yet in the example below the status array size is 2, not 4. Also, the example contradicts this statement from the manual:
The error-free execution of MPI_Waitall(count, array_of_requests,
array_of_statuses) has the same effect as the execution of
MPI_Wait(&array_of_request[i], &array_of_statuses[i]), for
i=0,...,count-1, in some arbitrary order.
#include "mpi.h"
#include <stdio.h>
int main(argc,argv)
int argc;
char *argv[]; {
int numtasks, rank, next, prev, buf[2], tag1=1, tag2=2;
MPI_Request reqs[4];
MPI_Status stats[2];
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
prev = rank-1;
next = rank+1;
if (rank == 0) prev = numtasks - 1;
if (rank == (numtasks - 1)) next = 0;
MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);
{ do some work }
MPI_Waitall(4, reqs, stats);
MPI_Finalize();
}
P.S. Definition of main looks strange. The return value is missing. Is it prehistoric C or typo?
Yes, this example contradicts the manual. If you compare it with the Fortran version, you'll see that the Fortran version is correct in that its status array is large enough: strangely, it is declared as a 2D array, but thanks to implicit interfaces and storage association it can be seen as a 1D array of size MPI_STATUS_SIZE * 2, which is larger than 4 provided MPI_STATUS_SIZE is larger than 1 (on my system it's 5).
And yes, the missing return statement is an error; however, some compilers resort to just emitting a warning when the return statement is omitted in main(). The prehistoric nature of the code can also be seen in the K&R-style declaration of the arguments.
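For reference, one possible corrected and modernized version of the example, with a standard main() declaration and a status array as long as the request array (MPI_STATUSES_IGNORE could be passed instead if the statuses are never inspected):

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];                 /* as long as the request array */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = (rank == 0) ? numtasks - 1 : rank - 1;
    next = (rank == numtasks - 1) ? 0 : rank + 1;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    /* ... do some work ... */

    MPI_Waitall(4, reqs, stats);         /* or MPI_STATUSES_IGNORE */
    MPI_Finalize();
    return 0;
}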

Sending and receiving 2D array over MPI

The issue I am trying to resolve is the following:
The C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run on 4 nodes (say) using MPI. The only communication that occurs between nodes is the sharing of edge values at the end of each time step. Every node shares the edge array data, A[i][j], with its neighbor.
Based on reading about MPI, I have the following scheme to be implemented.
if (myrank == 0)
{
    for (i = 0 to x)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(A[x][0], A[x][1], A[x][2], Destination= 1.....)
            MPI_RECEIVE(B[0][0], B[0][1]......Sender = 1.....)
            MPI_BARRIER
        }
}

if (myrank == 1)
{
    for (i = x+1 to xx)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(B[x][0], B[x][1], B[x][2], Destination= 0.....)
            MPI_RECEIVE(A[0][0], A[0][1]......Sender = 1.....)
            MPI_BARRIER
        }
}
I wanted to know if my approach is correct, and I would also appreciate any guidance on other MPI functions to look into for the implementation.
Thanks,
Ashwin.
Just to amplify Joel's points a bit:
This goes much easier if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically):
int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);
    return array;
}

/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);
Then, you can do sends and recieves of the entire NxM array with
MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);
and when you're done, free the memory with
free(A[0]);
free(A);
Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:
if (myrank == 0) {
    MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
    MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
    MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
    MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}
Another, more general, approach is to use MPI_Sendrecv:
int *sendptr, *recvptr;
int neigh = MPI_PROC_NULL;

if (myrank == 0) {
    sendptr = &(A[0][0]);
    recvptr = &(B[0][0]);
    neigh = 1;
} else {
    sendptr = &(B[0][0]);
    recvptr = &(A[0][0]);
    neigh = 0;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, tagA,
             recvptr, N*M, MPI_INT, neigh, tagB,
             MPI_COMM_WORLD, &status);
or nonblocking sends and/or receives.
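For completeness, a small sketch of the nonblocking variant; flat N*M buffers are used here to keep it self-contained, but with the alloc_2d_int helper above &(A[0][0]) would work the same way (this sketch assumes at least two ranks):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4
#define M 5

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *A = (int *)malloc(N * M * sizeof(int));   /* data this rank owns */
    int *B = (int *)malloc(N * M * sizeof(int));   /* data from the neighbour */
    for (int i = 0; i < N * M; i++) A[i] = rank;

    if (rank == 0 || rank == 1) {
        int neigh = 1 - rank;                      /* ranks 0 and 1 exchange */
        MPI_Request reqs[2];
        /* Post the receive first, then the send; neither call blocks. */
        MPI_Irecv(B, N * M, MPI_INT, neigh, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(A, N * M, MPI_INT, neigh, 0, MPI_COMM_WORLD, &reqs[1]);
        /* ... overlap computation that does not touch A or B here ... */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("Rank %d received a block filled with %d\n", rank, B[0]);
    }

    free(A);
    free(B);
    MPI_Finalize();
    return 0;
}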
First, you don't need that many barriers.
Second, you should really send your data as a single block, as multiple blocking send/receive calls will result in poor performance.
This question has already been answered quite thoroughly by Jonathan Dursi; however, as Jonathan Leffler pointed out in his comment on Jonathan Dursi's answer, C's multi-dimensional arrays are a contiguous block of memory. Therefore, I would like to point out that a not-too-large 2D array could simply be created on the stack:
int A[N][M];
Since the memory is contiguous, the array can be sent as it is:
MPI_Send(A, N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
On the receiving side, the array can be received into a 1d array of size N*M (which can then be copied into a 2d array if necessary):
int A_1d[N*M];
MPI_Recv(A_1d, N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);

// copying the array into a 2d array
int A_2d[N][M];
for (int i = 0; i < N; i++){
    for (int j = 0; j < M; j++){
        A_2d[i][j] = A_1d[(i*M)+j];
    }
}
Copying the array does cause twice the memory to be used, so it would be better to simply use A_1d by accessing its elements through A_1d[(i*M)+j].