We have only been working with MPI for about one day in my computer programming class, and now I have to write a program with it. I am to write a program that organizes the processes into two rings.
The first ring begins with process 0: each even process sends a message to the next even process, and the last process sends its message back to process 0. For example, 0 --> 2 --> 4 --> 6 --> 8 --> 0 (but it goes all the way up to 32 instead of 8). The second ring is the same, but begins with process 1 and sends to the previous odd process, then back to 1. For example, 1 --> 9 --> 7 --> 5 --> 3 --> 1.
Also, I am supposed to find the max, min, and average of a very large array of integers. I will have to scatter pieces of the array to each process, have each process compute a partial answer, and then reduce the answers back together on process 0 after everyone is done.
Finally, I am to scatter an array of characters across the processes, and each process will count how many times each letter appears in its section. That part really makes no sense to me. We have only just learned the very basics, so no fancy stuff please! Here's what I have so far; I have commented out some things just to remind myself of some stuff, so please ignore those if necessary.
#include <iostream>
#include "mpi.h"
using namespace std;
// compile: mpicxx program.cpp
// run: mpirun -np 4 ./a.out
int main(int argc, char *argv[])
{
int rank; // unique number associated with each core
int size; // total number of cores
char message[80];
char recvd[80];
int prev_node, next_node;
int tag;
MPI_Status status;
// start MPI interface
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
sprintf(message, "Heeeelp! from %d", rank);
MPI_Barrier(MPI_COMM_WORLD);
next_node = (rank + 2) % size;
prev_node = (size + rank - 2) % size;
tag = 0;
if (rank % 2) {
MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
MPI_Recv(&recvd, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
} else {
MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
MPI_Recv(&recvd, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
}
cout << "* Rank " << rank << ": " << recvd << endl;
//max
int large_array[100];
rank == 0;
int max = 0;
MPI_Scatter(&large_array, 1, MPI_INT, large_array, 1, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Reduce(&message, max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
MPI_Finalize();
return 0;
}
I have a small suggestion about this:
dest = rank + 2;
if (rank == size - 1)
dest = 0;
source = rank - 2;
if (rank == 0)
source = size - 1;
I think dest and source, as names, are going to be confusing (as both are destinations of messages, depending on the value of rank). Using the % operator might help improve clarity:
next_node = (rank + 2) % size;
prev_node = (size + rank - 2) % size;
You can select whether to receive or send to next_node and prev_node based on the value of rank % 2:
if (rank % 2) {
MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
MPI_Recv(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
} else {
MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
MPI_Recv(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
}
Doing this once or twice is fine, but if you find your code littered with these sorts of switches, it'd make sense to place these ring routines in a function and pass in the next and previous nodes as parameters.
When it comes time to distribute your arrays of numbers and arrays of characters, keep in mind that n / size will leave a remainder of n % size elements at the end of your array that also need to be handled. (Probably on the master node, just for simplicity.)
I added a few more output statements (and a place to store the message from the other nodes) and the simple rings program works as expected:
$ mpirun -np 16 ./a.out | sort -k3n
* Rank 0: Heeeelp! from 14
* Rank 1: Heeeelp! from 3
* Rank 2: Heeeelp! from 0
* Rank 3: Heeeelp! from 5
* Rank 4: Heeeelp! from 2
* Rank 5: Heeeelp! from 7
* Rank 6: Heeeelp! from 4
* Rank 7: Heeeelp! from 9
* Rank 8: Heeeelp! from 6
* Rank 9: Heeeelp! from 11
* Rank 10: Heeeelp! from 8
* Rank 11: Heeeelp! from 13
* Rank 12: Heeeelp! from 10
* Rank 13: Heeeelp! from 15
* Rank 14: Heeeelp! from 12
* Rank 15: Heeeelp! from 1
You can see the two rings there, each in their own direction:
#include <iostream>
#include "mpi.h"
using namespace std;
// compile: mpicxx program.cpp
// run: mpirun -np 4 ./a.out
int main(int argc, char *argv[])
{
int rank; // unique number associated with each core
int size; // total number of cores
char message[80];
char recvd[80];
int prev_node, next_node;
int tag;
MPI_Status status;
// start MPI interface
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
sprintf(message, "Heeeelp! from %d", rank);
// cout << "Rank " << rank << ": " << message << endl;
MPI_Barrier(MPI_COMM_WORLD);
next_node = (rank + 2) % size;
prev_node = (size + rank - 2) % size;
tag = 0;
if (rank % 2) {
MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
MPI_Recv(&recvd, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
} else {
MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
MPI_Recv(&recvd, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
}
cout << "* Rank " << rank << ": " << recvd << endl;
//cout << "After - Rank " << rank << ": " << message << endl;
// end MPI interface
MPI_Finalize();
return 0;
}
When it comes time to write the larger programs (array min, max, avg, and letter counts), you'll need to change things slightly: only rank == 0 will send messages at the start, distributing to all the other processes their pieces of the puzzle. The other processes will receive, do the work, then send back their results, and rank == 0 will need to integrate those results into a single coherent answer.
Related
I am trying to write a C++ program using MPI in which each rank sends a matrix to rank 0. When the matrix size is relatively small, the code works perfectly. However, when the matrix size becomes big, the code starts to give a strange error that only happens when I use a specific number of CPUs.
If you feel the full code is too long, please skip down to the minimal example below.
To avoid overlooking anything, I give the full source code here:
#include <iostream>
#include <mpi.h>
#include <cmath>
int world_size;
int world_rank;
MPI_Comm comm;
int m, m_small, m_small2;
int index(int row, int column)
{
return m * row + column;
}
int index3(int row, int column)
{
return m_small2 * row + column;
}
int main(int argc, char **argv) {
MPI_Init(&argc, &argv);
MPI_Status status;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
m = atoi(argv[1]); //Size
int ndims = 2;
int *dims = new int[ndims];
int *period = new int[ndims];
int *coords = new int[ndims];
for (int i=0; i<ndims; i++) dims[i] = 0;
for (int i=0; i<ndims; i++) period[i] = 0;
for (int i=0; i<ndims; i++) coords[i] = 0;
MPI_Dims_create(world_size, ndims, dims);
MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, period, 0, &comm);
MPI_Cart_coords(comm, world_rank, ndims, coords);
double *a, *a_2;
if (0 == world_rank) {
a = new double [m*m];
for (int i=0; i<m; i++) {
for (int j=0; j<m; j++) {
a[index(i,j)] = 0;
}
}
}
/*m_small is along the vertical direction, m_small2 is along the horizontal direction*/
//The upper cells take the remainder of the total lattice points along the vertical direction divided by the number of cells along that direction
if (0 == coords[0]){
m_small = int(m / dims[0]) + m % dims[0];
}
else m_small = int(m / dims[0]);
//The left cells take the remainder of the total lattice points along the horizontal direction divided by the number of cells along that direction
if (0 == coords[1]) {
m_small2 = int(m / dims[1]) + m % dims[1];
}
else m_small2 = int(m / dims[1]);
double *a_small = new double [m_small * m_small2];
/*Initialization of matrix*/
for (int i=0; i<m_small; i++) {
for (int j=0; j<m_small2; j++) {
a_small[index3(i,j)] = 2.5 ;
}
}
if (0 == world_rank) {
a_2 = new double[m_small*m_small2];
for (int i=0; i<m_small; i++) {
for (int j=0; j<m_small2; j++) {
a_2[index3(i,j)] = 0;
}
}
}
int loc[2];
int m1_rec, m2_rec;
MPI_Request send_req;
MPI_Isend(coords, 2, MPI_INT, 0, 1, MPI_COMM_WORLD, &send_req);
//This Isend may have a problem!
MPI_Isend(a_small, m_small*m_small2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &send_req);
if (0 == world_rank) {
for (int i = 0; i < world_size; i++) {
MPI_Recv(loc, 2, MPI_INT, i, 1, MPI_COMM_WORLD, MPI_STATUSES_IGNORE);
/*Determine the size of matrix for receiving the information*/
if (0 == loc[0]) {
m1_rec = int(m / dims[0]) + m % dims[0];
} else {
m1_rec = int(m / dims[0]);
}
if (0 == loc[1]) {
m2_rec = int(m / dims[1]) + m % dims[1];
} else {
m2_rec = int(m / dims[1]);
}
//This receive may have a problem!
MPI_Recv(a_2, m1_rec * m2_rec, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUSES_IGNORE);
}
}
delete[] a_small;
if (0 == world_rank) {
delete[] a;
delete[] a_2;
}
delete[] dims;
delete[] period;
delete[] coords;
MPI_Finalize();
return 0;
}
Basically, the code reads an input value m and then constructs a big m x m matrix. MPI creates a 2D topology according to the number of CPUs, which divides the big matrix into sub-matrices. The size of each sub-matrix is m_small x m_small2. There should be no problem in these steps.
The problem happens when I send the sub-matrix in each rank to rank-0 using MPI_Isend(a_small, m_small*m_small2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &send_req); and MPI_Recv(a_2, m1_rec * m2_rec, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUSES_IGNORE);.
For example, when I run the code by this command: mpirun -np 2 ./a.out 183, I will get the error of
Read -1, expected 133224, errno = 14
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: 0x7fb23b485010
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node dx1-500-24164 exited on signal 11 (Segmentation fault).
Strangely, if I modify the number of CPUs or decrease the value of the input argument, the problem is not there anymore. Also, if I just comment out the MPI_Isend/Recv, there is no problem either.
So how can I solve this problem?
Edit.1
A minimal example that reproduces the problem.
When the size of the matrix is small, there is no problem, but the problem appears when you increase the size of the matrix (at least for me):
#include <iostream>
#include <mpi.h>
#include <cmath>
int world_size;
int world_rank;
MPI_Comm comm;
int m, m_small, m_small2;
int main(int argc, char **argv) {
MPI_Init(&argc, &argv);
MPI_Status status;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
m = atoi(argv[1]); //Size
double *a_2;
//Please increase the size of m_small and m_small2 and wait for the problem to happen
m_small = 100;
m_small2 = 200;
double *a_small = new double [m_small * m_small2];
if (0 == world_rank) {
a_2 = new double[m_small*m_small2];
}
MPI_Request send_req;
MPI_Isend(a_small, m_small*m_small2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &send_req);
if (0 == world_rank) {
for (int i = 0; i < world_size; i++) {
MPI_Recv(a_2, m_small*m_small2, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUSES_IGNORE);
}
}
delete[] a_small;
if (0 == world_rank) {
delete[] a_2;
}
MPI_Finalize();
return 0;
}
Command to run: mpirun -np 2 ./a.out 183 (the input argument is actually not used by the code this time)
The problem is in the line
MPI_Isend(a_small, m_small*m_small2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &send_req);
MPI_Isend is a non-blocking send (which you pair here with a blocking MPI_Recv). When it returns, the library may still be using a_small; you must not reuse or free the buffer until you complete the send, e.g. with MPI_Wait(&send_req, MPI_STATUS_IGNORE);. As written, you delete a_small while the non-blocking send may still be reading it, which accesses freed memory and can lead to the segfault and crash. Try a blocking send like this:
MPI_Send(a_small, m_small*m_small2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
This will return once a_small can be reused (including being deleted). The data may still not have been received by the receivers at that point; it may instead be held in an internal temporary buffer.
So let's say that I have created a main-worker program with the following steps:
1 - main sends tasks to all workers
2 - faster workers accomplish tasks and send the results back to the main
3 - main receives the results from the fastest workers and sends new tasks to everyone
4 - faster workers are ready to receive the task, but slower workers have to interrupt or cancel the old, slow, task that they were doing, in order to start the new task at the same time as the faster worker
I know how to do all the steps, except for step 4, where I would have to interrupt what the slower workers are doing in order to proceed to the next task.
Here is an example of an incomplete code that is missing that part:
#include <mpi.h>
#include <iostream>
#include <string>
#include <unistd.h>
using namespace std;
int main(int argc, char* argv[])
{
MPI_Init(&argc,&argv);
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
if (rank == 0) {
int value = 17;
string valuec = "Hello";
for(int i = 1; i < world_size; i++){
int result = MPI_Send(valuec.c_str(), valuec.length(), MPI_CHAR, i, 0, MPI_COMM_WORLD);
if (result == MPI_SUCCESS)
std::cout << "Rank 0 OK!" << std::endl;
}
int workersDone = 0;
MPI_Status status;
int flag = 0;
while(1){
flag=0;
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if(flag==1){
workersDone++;
cout << "Workers done: " << workersDone << endl;
}
if(workersDone >= world_size/2){/* here the main moves on
before all workers are done
*/
cout << "Main breaking" << endl;
break;
}
}
/* interruption Here:
How do I make the main tell to the slow workers
interrupt or cancel what they were doing in order
to receive new tasks
*/
// New tasks should go here here
} else if (rank != 0) {
int receivedMessages = 0;
while(1){
MPI_Status status;
int flag = 0;
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if(flag==1){
receivedMessages++;
int value;
char buffer[256];
int result = MPI_Recv(&buffer, 256, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
cout << rank << " received " << buffer << endl;
sleep(rank); /* this hypothetical task will be slower
in some workers, faster in others. In the
final version of code of course this
will not be a sleep command, and the time
it takes will not be proportional to the
process rank.
*/
MPI_Send(buffer, sizeof(buffer), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
cout << rank << " breaking" << endl;
break;
}
}
}
MPI_Finalize();
return 0;
}
I am trying to develop a parallel random walker simulation with MPI and C++.
In my simulation, each process can be thought of as a cell which can contain particles (random walkers). The cells are aligned in one dimension with periodic boundary conditions (i.e. ring topology).
In each time step, a particle can stay in its cell or go into the left or right neighbour cell with a certain probability. To make it a bit easier, only the last particle in each cell's list can walk. If the particle walks, it has to be sent to the process with the according rank (MPI_Isend + MPI_Probe + MPI_Recv + MPI_Waitall).
However, after the first step my particles start disappearing, i.e. the messages are getting 'lost' somehow.
Below is a minimal example (sorry if it's still rather long). To better track the particle movements, each particle has an ID which corresponds to the rank of the process in which it started. After each step, each cell prints the IDs of the particles stored in it.
#include <mpi.h>
#include <vector>
#include <iostream>
#include <random>
#include <string>
#include <sstream>
#include <chrono>
#include <algorithm>
using namespace std;
class Particle
{
public:
int ID; // this is the rank of the process which initialized the particle
Particle () : ID(0) {};
Particle (int ID) : ID(ID) {};
};
stringstream msg;
string msgString;
int main(int argc, char** argv)
{
// Initialize the MPI environment
MPI_Init(NULL, NULL);
// Get the number of processes
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
// Get the rank of the process
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
// communication declarations
MPI_Status status;
// get the ranks of neighbors (periodic boundary conditions)
int neighbors[2];
neighbors[0] = (world_size + world_rank - 1) % world_size; // left neighbor
neighbors[1] = (world_size + world_rank + 1) % world_size; // right neighbor
// declare particle type
MPI_Datatype type_particle;
MPI_Type_contiguous (1, MPI_INT, &type_particle);
MPI_Type_commit (&type_particle);
// every process inits 1 particle with ID = world_rank
vector<Particle> particles;
particles.push_back (Particle(world_rank));
// obtain a seed from the timer
typedef std::chrono::high_resolution_clock myclock;
myclock::time_point beginning = myclock::now();
myclock::duration d = myclock::now() - beginning;
unsigned seed2 = d.count();
default_random_engine generator (seed2);
uniform_real_distribution<double> distribution (0, 1);
// ------------------------------------------------------------------
// begin time loop
//-------------------------------------------------------------------
for (int t=0; t<10; t++)
{
// ------------------------------------------------------------------
// 1) write a message string containing the current list of particles
//-------------------------------------------------------------------
// write the rank and the particle IDs into the msgString
msg << "rank " << world_rank << ": ";
for (auto& i : particles)
{
msg << i.ID << " ";
}
msg << "\n";
msgString = msg.str();
msg.str (string()); msg.clear ();
// to print the messages in order, the messages are gathered by root (rank 0) and then printed
// first, gather nums to root
int num = msgString.size();
int rcounts[world_size];
MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, 0, MPI_COMM_WORLD);
// root now has correct rcounts, using these we set displs[] so
// that data is placed contiguously (or concatenated) at receive end
int displs[world_size];
displs[0] = 0;
for (int i=1; i<world_size; ++i)
{
displs[i] = displs[i-1]+rcounts[i-1]*sizeof(char);
}
// create receive buffer
int rbuf_size = displs[world_size-1]+rcounts[world_size-1];
char *rbuf = new char[rbuf_size];
// gather the messages
MPI_Gatherv( &msgString[0], num, MPI_CHAR, rbuf, rcounts, displs, MPI_CHAR,
0, MPI_COMM_WORLD);
// root prints the messages
if (world_rank == 0)
{
cout << endl << "step " << t << endl;
for (int i=0; i<rbuf_size; i++)
cout << rbuf[i];
}
// ------------------------------------------------------------------
// 2) send particles randomly to neighbors
//-------------------------------------------------------------------
Particle snd_buf;
int sndDest = -1;
// 2a) if there are particles left, prepare a message. otherwise, proceed to step 2b)
if (!particles.empty ())
{
// write the last particle in the list to a buffer
snd_buf = particles.back ();
// flip a coin. with a probability of 50 %, the last particle in the list gets sent to a random neighbor
double rnd = distribution (generator);
if (rnd <= .5)
{
particles.pop_back ();
// pick random neighbor
if (rnd < .25)
{
sndDest = neighbors[0]; // send to the left
}
else
{
sndDest = neighbors[1]; // send to the right
}
}
}
// 2b) always send a message to each neighbor (even if it's empty)
MPI_Request requests[2];
for (int i=0; i<2; i++)
{
int dest = neighbors[i];
MPI_Isend (
&snd_buf, // void* data
sndDest==dest ? 1 : 0, // int count <---------------- send 0 particles to every neighbor except the one specified by sndDest
type_particle, // MPI_Datatype
dest, // int destination
0, // int tag
MPI_COMM_WORLD, // MPI_Comm
&requests[i]
);
}
// ------------------------------------------------------------------
// 3) probe and receive messages from each neighbor
//-------------------------------------------------------------------
for (int i=0; i<2; i++)
{
int src = neighbors[i];
// probe to determine if the message is empty or not
MPI_Probe (
src, // int source,
0, // int tag,
MPI_COMM_WORLD, // MPI_Comm comm,
&status // MPI_Status* status
);
int nRcvdParticles = 0;
MPI_Get_count (&status, type_particle, &nRcvdParticles);
// if the message is non-empty, receive it
if (nRcvdParticles > 0) // this proc can receive max. 1 particle from each neighbor
{
Particle rcv_buf;
MPI_Recv (
&rcv_buf, // void* data
1, // int count
type_particle, // MPI_Datatype
src, // int source
0, // int tag
MPI_COMM_WORLD, // MPI_Comm comm
MPI_STATUS_IGNORE // MPI_Status* status
);
// add received particle to the list
particles.push_back (rcv_buf);
}
}
MPI_Waitall (2, requests, MPI_STATUSES_IGNORE);
}
// ------------------------------------------------------------------
// end time loop
//-------------------------------------------------------------------
// Finalize the MPI environment.
MPI_Finalize();
if (world_rank == 0)
cout << "\nMPI_Finalize()\n";
return 0;
}
I ran the simulation with 8 processes and below is a sample of the output. In step 1, it still seems to work well, but beginning with step 2 the particles begin disappearing.
step 0
rank 0: 0
rank 1: 1
rank 2: 2
rank 3: 3
rank 4: 4
rank 5: 5
rank 6: 6
rank 7: 7
step 1
rank 0: 0
rank 1: 1
rank 2: 2 3
rank 3:
rank 4: 4 5
rank 5:
rank 6: 6 7
rank 7:
step 2
rank 0: 0
rank 1:
rank 2: 2
rank 3:
rank 4: 4
rank 5:
rank 6: 6 7
rank 7:
step 3
rank 0: 0
rank 1:
rank 2: 2
rank 3:
rank 4:
rank 5:
rank 6: 6
rank 7:
step 4
rank 0: 0
rank 1:
rank 2: 2
rank 3:
rank 4:
rank 5:
rank 6: 6
rank 7:
I have no idea what's wrong with the code... Somehow the combination MPI_Isend + MPI_Probe + MPI_Recv + MPI_Waitall doesn't seem to work... Any help is really appreciated!
There is an error in your code. The following logic (irrelevant code and arguments omitted) is wrong:
MPI_Probe(..., &status);
MPI_Get_count (&status, type_particle, &nRcvdParticles);
// if the message is non-empty, receive it
if (nRcvdParticles > 0)
{
MPI_Recv();
}
MPI_Probe does not remove zero-sized messages from the message queue. The only MPI calls that do so are MPI_Recv and the combination of MPI_Irecv + MPI_Test/MPI_Wait. You must receive all messages, including zero-sized ones, otherwise they will prevent the reception of further messages with the same (source, tag) combination. Although receiving a zero-sized message writes nothing into the receive buffer, it removes the message envelope from the queue, so the next matching message can be received.
Solution: move the call to MPI_Recv before the conditional operator.
I'm trying to scatter values among processes belonging to a hypercube group (quicksort project).
Depending on the number of processes, I either create a new communicator excluding the excess processes, or I duplicate MPI_COMM_WORLD if the process count exactly fits a hypercube (power of 2).
In both cases, processes other than 0 receive their data, but:
- In the first scenario, process 0 crashes with segmentation fault 11
- In the second scenario, nothing faults, but the values received by process 0 are gibberish
NOTE: If I try a regular MPI_Scatter everything works well.
//Input
vector<int> LoadFromFile();
int d; //dimension of hypercube
int p; //active processes
int idle; //idle processes
vector<int> values; //values loaded
int arraySize; //number of total values to distribute
int main(int argc, char* argv[])
{
int mpiWorldRank;
int mpiWorldSize;
int mpiRank;
int mpiSize;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &mpiWorldRank);
MPI_Comm_size(MPI_COMM_WORLD, &mpiWorldSize);
MPI_Comm MPI_COMM_HYPERCUBE;
d = log2(mpiWorldSize);
p = pow(2, d); //Number of processes belonging to the hypercube
idle = mpiWorldSize - p; //number of processes in excess
int toExclude[idle]; //array of idle processes to exclude from communicator
int sendCounts[p]; //array of values sizes to be sent to processes
//
int i = 0;
while (i < idle)
{
toExclude[i] = mpiWorldSize - 1 - i;
++i;
}
//CREATING HYPERCUBE GROUP: Group of size of power of 2 -----------------
MPI_Group world_group;
MPI_Comm_group(MPI_COMM_WORLD, &world_group);
// Remove excessive processors if any from communicator
if (idle > 0)
{
MPI_Group newGroup;
MPI_Group_excl(world_group, 1, toExclude, &newGroup);
MPI_Comm_create(MPI_COMM_WORLD, newGroup, &MPI_COMM_HYPERCUBE);
//Abort any processor not part of the hypercube.
if (mpiWorldRank > p)
{
cout << "aborting: " << mpiWorldRank <<endl;
MPI_Finalize();
return 0;
}
}
else
{
MPI_Comm_dup(MPI_COMM_WORLD, &MPI_COMM_HYPERCUBE);
}
MPI_Comm_rank(MPI_COMM_HYPERCUBE, &mpiRank);
MPI_Comm_size(MPI_COMM_HYPERCUBE, &mpiSize);
//END OF: CREATING HYPERCUBE GROUP --------------------------
if (mpiRank == 0)
{
//STEP1: Read input
values = LoadFromFile();
arraySize = values.size();
}
//Transforming input vector into an array
int valuesArray[values.size()];
if(mpiRank == 0)
{
copy(values.begin(), values.end(), valuesArray);
}
//Broadcast input size to all processes
MPI_Bcast(&arraySize, 1, MPI_INT, 0, MPI_COMM_HYPERCUBE);
//MPI_Scatterv: determining size of arrays to be received and displacement
int nmin = arraySize / p;
int remainingData = arraySize % p;
int displs[p];
int recvCount;
int k = 0;
for (i=0; i<p; i++)
{
sendCounts[i] = i < remainingData
? nmin+1
: nmin;
displs[i] = k;
k += sendCounts[i];
}
recvCount = sendCounts[mpiRank];
int recvValues[recvCount];
//Following MPI_Scatter works well:
// MPI_Scatter(&valuesArray, 13, MPI_INT, recvValues , 13, MPI_INT, 0, MPI_COMM_HYPERCUBE);
MPI_Scatterv(&valuesArray, sendCounts, displs, MPI_INT, recvValues , recvCount, MPI_INT, 0, MPI_COMM_HYPERCUBE);
int j = 0;
while (j < recvCount)
{
cout << "rank " << mpiRank << " received: " << recvValues[j] << endl;
++j;
}
MPI_Finalize();
return 0;
}
First of all, you are supplying the wrong arguments to MPI_Group_excl:
MPI_Group_excl(world_group, 1, toExclude, &newGroup);
// ^
The second argument specifies the number of entries in the exclusion list and should therefore be equal to idle. Since you are excluding a single rank only, the resulting group has mpiWorldSize-1 ranks, and hence MPI_Scatterv expects that both sendCounts[] and displs[] have that many elements. Of those, only p elements are properly initialised and the rest are random; therefore MPI_Scatterv crashes in the root.
Another error is the code that aborts the idle processes: it should read if (mpiWorldRank >= p).
I would recommend replacing the entire exclusion code with a single call to MPI_Comm_split instead:
MPI_Comm comm_hypercube;
int colour = mpiWorldRank >= p ? MPI_UNDEFINED : 0;
MPI_Comm_split(MPI_COMM_WORLD, colour, mpiWorldRank, &comm_hypercube);
if (comm_hypercube == MPI_COMM_NULL)
{
MPI_Finalize();
return 0;
}
When no process supplies MPI_UNDEFINED as its colour, the call is equivalent to MPI_Comm_dup.
Note that you should avoid using in your code names starting with MPI_ as those could clash with symbols from the MPI implementation.
Additional note: std::vector<T> uses contiguous storage, therefore you could do without copying the elements into a regular array and simply provide the address of the first element in the call to MPI_Scatter(v):
MPI_Scatterv(&values[0], ...);
I'm not sure that I am correctly understanding what MPI_Scatterv is supposed to do. I have 79 items to scatter among a variable number of nodes. However, when I use the MPI_Scatterv command I get ridiculous numbers (as if the elements of my receive buffer are uninitialized). Here is the relevant code snippet:
MPI_Init(&argc, &argv);
int id, procs;
MPI_Comm_rank(MPI_COMM_WORLD, &id);
MPI_Comm_size(MPI_COMM_WORLD, &procs);
//Assign each file a number and figure out how many files should be
//assigned to each node
int file_numbers[files.size()];
int send_counts[nodes] = {0};
int displacements[nodes] = {0};
for (int i = 0; i < files.size(); i++)
{
file_numbers[i] = i;
send_counts[i%nodes]++;
}
//figure out the displacements
int sum = 0;
for (int i = 0; i < nodes; i++)
{
displacements[i] = sum;
sum += send_counts[i];
}
//Create a receiving buffer
int *rec_buf = new int[79];
if (id == 0)
{
MPI_Scatterv(&file_numbers, send_counts, displacements, MPI_INT, rec_buf, 79, MPI_INT, 0, MPI_COMM_WORLD);
}
cout << "got here " << id << " checkpoint 1" << endl;
cout << id << ": " << rec_buf[0] << endl;
cout << "got here " << id << " checkpoint 2" << endl;
MPI_Barrier(MPI_COMM_WORLD);
free(rec_buf);
MPI_Finalize();
When I run that code I receive this output:
got here 1 checkpoint 1
1: -1168572184
got here 1 checkpoint 2
got here 2 checkpoint 1
2: 804847848
got here 2 checkpoint 2
got here 3 checkpoint 1
3: 1364787432
got here 3 checkpoint 2
got here 4 checkpoint 1
4: 903413992
got here 4 checkpoint 2
got here 0 checkpoint 1
0: 0
got here 0 checkpoint 2
I read the documentation for Open MPI and looked through some code examples; I'm not sure what I'm missing. Any help would be great!
One of the most common MPI mistakes strikes again:
if (id == 0) // <---- PROBLEM
{
MPI_Scatterv(&file_numbers, send_counts, displacements, MPI_INT,
rec_buf, 79, MPI_INT, 0, MPI_COMM_WORLD);
}
MPI_SCATTERV is a collective MPI operation. Collective operations must be executed by all processes in the specified communicator in order to complete successfully. You are executing it only in rank 0, and that's why only rank 0 gets the correct values.
Solution: remove the conditional if (...).
But there is another subtle mistake here. Since collective operations do not provide any status output, the MPI standard enforces strict matching of the number of elements sent to some rank and the number of elements the rank is willing to receive. In your case the receiver always specifies 79 elements which might not match the corresponding number in send_counts. You should instead use:
MPI_Scatterv(file_numbers, send_counts, displacements, MPI_INT,
rec_buf, send_counts[id], MPI_INT,
0, MPI_COMM_WORLD);
Also note the following discrepancy in your code, which might just be a typo introduced while posting the question here:
MPI_Comm_size(MPI_COMM_WORLD, &procs);
^^^^^
int send_counts[nodes] = {0};
^^^^^
int displacements[nodes] = {0};
^^^^^
While you obtain the number of ranks in the procs variable, nodes is used in the rest of your code. I guess nodes should be replaced by procs.