Seg fault while using MPI_Scatter - C++

I have a problem with MPI_Scatter. I don't know how to use it, and my current program crashes with a seg fault when I launch it.
I guess the problem is in the parameters of MPI_Scatter, particularly in calling it with the right operator (& or * or void), but I've tried almost every combination and nothing has actually helped.
#include <iostream>
#include <stdio.h>
#include <mpi.h>

// k = 3, N = 12, 1,2,3, 4,5,6, 7,8,9, 10,11,12
int main(int argc, char **argv) {
    int N, size, myrank;
    int k;
    std::cin >> N;
    std::cin >> k;
    int *mass = new int[N];
    int *recv = new int[k];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        std::cout << "get k and n \n";
        for (int i = 0; i < N; ++i) {
            mass[i] = i;
            std::cout << i << " written\n";
        }
    }
    MPI_Scatter(mass, k, MPI_INT, recv, k, MPI_INT, 0, MPI_COMM_WORLD);
    int sum = 0;
    std::cout << "myrank" << myrank << '\n';
    for (int i = 0; i < k; ++i) {
        std::cout << recv[i] << '\n';
    }
    MPI_Finalize();
    return 0;
}
When I launch this code, it prints this:
N = 12
k = 3
get k and n
0 written
1 written
2 written
3 written
4 written
5 written
6 written
7 written
8 written
9 written
10 written
11 written
myrank0
0
1
2
myrank1
myrank3
myrank2
[1570583203.522390] [calc:32739:0] mpool.c:38 UCX WARN object 0x7fe1f08b2f60 was not returned to mpool mm_recv_desc
[1570583203.523214] [calc:32740:0] mpool.c:38 UCX WARN object 0x7f4643986f60 was not returned to mpool mm_recv_desc
[1570583203.524205] [calc:32741:0] mpool.c:38 UCX WARN object 0x7f22535d4f60 was not returned to mpool mm_recv_desc

MPI launchers typically redirect stdin to rank 0 only, so N and k are not correctly set on the other ranks.
Here is a working version of your program:
#include <iostream>
#include <cassert>
#include <stdio.h>
#include <mpi.h>

// k = 3, N = 12, 1,2,3, 4,5,6, 7,8,9, 10,11,12
int main(int argc, char **argv) {
    int k, N, size, myrank;
    int *mass;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        std::cout << "get k and n \n";
        std::cin >> N;
        std::cin >> k;
        assert(N >= k * size);
        mass = new int[N];
        for (int i = 0; i < N; ++i) {
            mass[i] = i;
            std::cout << i << " written\n";
        }
    }
    MPI_Bcast(&k, 1, MPI_INT, 0, MPI_COMM_WORLD);
    int *recv = new int[k];
    MPI_Scatter(mass, k, MPI_INT, recv, k, MPI_INT, 0, MPI_COMM_WORLD);
    int sum = 0;
    std::cout << "myrank" << myrank << '\n';
    for (int i = 0; i < k; ++i) {
        std::cout << recv[i] << '\n';
    }
    MPI_Finalize();
    return 0;
}
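For reference, a typical way to build and run it (the file and executable names are placeholders, and the mpirun flags vary by MPI implementation):

$ mpicxx scatter.cpp -o scatter
$ echo "12 3" | mpirun -np 4 ./scatter

Piping the input matters here: since stdin reaches rank 0 only, the program reads N and k there and broadcasts k before the other ranks allocate their receive buffers.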

Related

Why the warning "Using uninitialized memory '*unique_counts'", and how to fix it?

I am trying to solve this problem: find all unique elements of a two-dimensional array of integers, using MPI_Scatter to distribute the array. How do I fix this warning?

Severity: Warning C6001
Description: Using uninitialized memory '*unique_counts'.
Project: ConsoleApplication15
File: C:\Users\netd3en\source\repos\ConsoleApplication15\ConsoleApplication15\ConsoleApplication15.cpp
Line: 29
#include <iostream>
#include <unordered_set>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int arr[3][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    int* local_arr = new int[3];
    MPI_Scatter(arr, 3, MPI_INT, local_arr, 3, MPI_INT, 0, MPI_COMM_WORLD);
    std::unordered_set<int> unique_elements;
    for (int i = 0; i < 3; i++) {
        unique_elements.insert(local_arr[i]);
    }
    int* unique_counts = new int[size];
    int local_unique_count = unique_elements.size();
    MPI_Gather(&local_unique_count, 1, MPI_INT, unique_counts, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        std::unordered_set<int> all_unique_elements;
        int offset = 0;
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < unique_counts[i]; j++) {
                all_unique_elements.insert(local_arr[offset + j]);
            }
            offset += unique_counts[i];
        }
        std::cout << "Unique elements:";
        for (auto it = all_unique_elements.begin(); it != all_unique_elements.end(); it++) {
            std::cout << " " << *it;
        }
        std::cout << std::endl;
    }
    MPI_Finalize();
    return 0;
}
You don't initialize the allocated int array pointed to by unique_counts, and MPI_Gather can fail (you don't check its return value) and leave the array elements unchanged. Try

int* unique_counts = new int[size]{};
                                  ^^

Don't forget delete[] unique_counts; afterwards.
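For completeness, a sketch of an alternative that sidesteps manual memory management entirely (using std::vector is my suggestion, not part of the original answer):

// Requires #include <vector>. Zero-initialized by the constructor; no delete[] needed.
std::vector<int> unique_counts(size, 0);
MPI_Gather(&local_unique_count, 1, MPI_INT,
           unique_counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

Either way, every element of the receive buffer has a defined value even if the gather fails, which is what the C6001 warning is about.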

BubbleSort in C++ using MPI

I am a beginner in MPI and am trying to write sorting code (bubble sort).
The code works, but it seems like I'm missing something.
Here is the code:
#define N 10
#include <iostream>
#include <stdio.h>
#include <math.h>
#include <time.h>
#include <stdlib.h>
#include <stddef.h>
#include "mpi.h"

using namespace std;

int main(int argc, char* argv[])
{
    int i, j, k, rank, size;
    int a[N] = { 10,9,8,7,6,5,4,3,2,1 };
    int c[N];
    int aa[N], cc[N];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Scatter(a, N/size, MPI_INT, aa, N/size, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    int n = N/size;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (aa[j] > aa[j + 1]) {
                int temp = aa[j];
                aa[j] = aa[j + 1];
                aa[j + 1] = temp;
            }
        }
    }
    for (int i = 0; i < n; i++) {
        cc[i] = aa[i];
    }
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Gather(cc, N/size, MPI_INT, c, N/size, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    cout << cc[9];
    if (rank == 0) {
        cout << "C is look like : " << endl;
        for (int i = 0; i < N; i++) {
            cout << c[i] << " ";
        }
    }
}
Output of the program is below (my MPI run is configured with 4 processes); at the end we get garbage values:
-858993460 C is look like :
-858993460
-858993460
-858993460
9 10 7 8 5 6 3 4 -858993460 -858993460
There are several issues in your program:
cc[9] is used uninitialized
you only operate on (N/size)*size elements; in your case N=10 and size=4, so you operate on only 8 elements. The cure is to use MPI_Scatterv() and MPI_Gatherv() (see the sketch after this list)
assuming your bubble sort is correct (I did not check that part), your program gathers sorted (sub)arrays, and you cannot naively expect the outcome to be a (full-size) sorted array
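A minimal sketch of the MPI_Scatterv()/MPI_Gatherv() variant (the counts/displacements arithmetic below is illustrative, not taken from the question):

// Sketch: distribute N ints over `size` ranks when size does not divide N.
// The first N % size ranks receive one extra element.
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[]) {
    const int N = 10;
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<int> counts(size), displs(size);
    for (int r = 0, off = 0; r < size; ++r) {
        counts[r] = N / size + (r < N % size ? 1 : 0);
        displs[r] = off;
        off += counts[r];
    }

    std::vector<int> a(N), local(counts[rank]);
    if (rank == 0)
        for (int i = 0; i < N; ++i) a[i] = N - i;   // 10,9,...,1

    MPI_Scatterv(a.data(), counts.data(), displs.data(), MPI_INT,
                 local.data(), counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    // ... bubble sort `local` here ...

    MPI_Gatherv(local.data(), counts[rank], MPI_INT,
                a.data(), counts.data(), displs.data(), MPI_INT,
                0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

After the gather, rank 0 holds `size` sorted runs back to back; merging them (not shown) is still required to obtain a fully sorted array.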

How to send and receive variable-length (processor-dependent) std::vector<myStruct> using MPI

I have a vector of custom structs living on each processor. These vectors are all different sizes. I am struggling to receive the data from processors 1 through N on processor 0.
I have attempted to implement the advice on pages 31-33 of the slide show linked below, where std::vectors are sent and received. I have also implemented the MPI_register and MPI_deregister functions.
I'm compiling and running this code on OS X.
https://www.sharcnet.ca/help/images/f/fa/MPI_Datatypes_with_Opaque_Types_seminar_2014.pdf
#include <mpi.h>
#include <vector>
#include <iostream>
#include <string>
#include <algorithm>
#include <cstddef>

struct crab_claw
{
    crab_claw() : com(), number(), weight(), crossing_count(), com0_id() {}

    crab_claw(
        std::vector<double> c,
        int n,
        double w,
        std::vector<double> cc,
        int id
    ) : com( c ), number( n ), weight( w ), crossing_count( cc ), com0_id( id ) {}

    std::vector<double> com = std::vector<double>(3);
    int number;
    double weight;
    std::vector<double> crossing_count = std::vector<double>(3);
    int com0_id;
};

MPI_Datatype register_mpi_type(crab_claw) // const&)
{
    constexpr std::size_t num_members = 5;
    int lengths[num_members] = {3,1,1,3,1};
    MPI_Aint offsets[num_members] = {
        offsetof(crab_claw, com),            // vector (of doubles)
        offsetof(crab_claw, number),         // int
        offsetof(crab_claw, weight),         // double
        offsetof(crab_claw, crossing_count), // vector (of doubles)
        offsetof(crab_claw, com0_id)         // int
    };
    MPI_Datatype types[num_members] = { MPI_DOUBLE, MPI_INT, MPI_DOUBLE, MPI_DOUBLE, MPI_INT };
    MPI_Datatype type;
    MPI_Type_struct(num_members, lengths, offsets, types, &type);
    MPI_Type_commit(&type);
    return type;
}

void deregister_mpi_type(MPI_Datatype type)
{
    MPI_Type_free(&type);
}

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int mpi_nprocessors; MPI_Comm_size(MPI_COMM_WORLD, &mpi_nprocessors);
    int mpi_my_id; MPI_Comm_rank(MPI_COMM_WORLD, &mpi_my_id);
    const int mpi_master_id = 0;

    std::vector<crab_claw> h; double j = mpi_my_id;
    for(int i = 0; i < (2*mpi_my_id+1); ++i)
    {
        crab_claw h_1;
        std::vector<double> h_com(3,0.);
        h_com[0] = ((i+1)/20. * 2*j)/(i+1);
        h_com[1] = ((i+1)/20. * 2*j)/(i+1);
        h_com[2] = ((i+1)/20. * 2*j)/(i+1);
        j /= 0.5;
        std::vector<double> crossing_count(3,0.);
        h_1.com = h_com;
        h_1.number = i*mpi_my_id+1;
        h_1.weight = j*0.3;
        h_1.crossing_count = crossing_count;
        h_1.com0_id = mpi_my_id;
        h.push_back(h_1);
    }
    MPI_Barrier(MPI_COMM_WORLD);

    /* create a type for struct crab_claw */
    std::vector<crab_claw> storage;
    MPI_Datatype type = register_mpi_type(h[0]);

    if (mpi_my_id != mpi_master_id)
    {
        int tag = mpi_my_id;
        unsigned length = h.size();
        const int destination = mpi_master_id;
        MPI_Send(&length, 1, MPI_UNSIGNED, destination, tag+mpi_nprocessors, MPI_COMM_WORLD);
        if(length != 0)
        {
            MPI_Send(h.data(), length, type, destination, mpi_my_id, MPI_COMM_WORLD);
        }
    }
    MPI_Barrier(MPI_COMM_WORLD);

    if (mpi_my_id == mpi_master_id)
    {
        for(int j = 0; j < mpi_nprocessors; ++j)
        {
            if(j == 0)
            {
                storage.insert(storage.end(), h.begin(), h.end());
                std::cout << "inert insert" << '\n';
            }
            if(j > 0)
            {
                unsigned length;
                MPI_Status s;
                MPI_Recv(&length, 1, MPI_UNSIGNED, j, j+mpi_nprocessors, MPI_COMM_WORLD, &s);
                std::vector<crab_claw> rec;
                //std::cout << "MPIMYID " << mpi_my_id << " LENGTH OF RECEIVED OBJ " << length << " j " << j << '\n';
                if (length != 0)
                {
                    h.resize(length);
                    MPI_Recv(h.data(), length, type, j, j, MPI_COMM_WORLD, &s);
                    std::cout << "SIZE() " << h.size() << " MY MPI ID " << mpi_my_id << " h[0].number " << h[0].weight << '\n';
                    //storage.insert(storage.end(), h.begin(), h.end());
                } else
                {
                    h.clear();
                }
            }
        }
    }
    //std::cout << mpi_my_id << '\n';
    MPI_Finalize();
    return 0;
}
This is the error I receive. I think it's saying that the data from h sent by any rank other than the master is being freed without having been allocated?
parallel(32428,0x7fff7d3e6000) malloc: *** error for object 0x7fcb2bf07d10: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
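A likely explanation (an editor's note, since this question carries no answer in the thread): crab_claw holds std::vector members, and the offsetof/MPI_Type_struct recipe describes the vector objects themselves, i.e. their internal pointers, not the doubles they point to. The receiver therefore gets structs whose vectors carry pointers that were valid only in the sending process, and destroying them frees memory that was never allocated locally, which matches the malloc error. A minimal sketch of one workaround, under the assumption that fixed-size arrays are acceptable (crab_claw_pod and register_pod_type are illustrative names; MPI_Type_create_struct is the non-deprecated spelling of MPI_Type_struct):

#include <mpi.h>
#include <cstddef>

// Trivially copyable: the bytes at these offsets really are the payload.
struct crab_claw_pod {
    double com[3];
    int    number;
    double weight;
    double crossing_count[3];
    int    com0_id;
};

MPI_Datatype register_pod_type() {
    constexpr int num_members = 5;
    int lengths[num_members] = {3, 1, 1, 3, 1};
    MPI_Aint offsets[num_members] = {
        offsetof(crab_claw_pod, com),
        offsetof(crab_claw_pod, number),
        offsetof(crab_claw_pod, weight),
        offsetof(crab_claw_pod, crossing_count),
        offsetof(crab_claw_pod, com0_id)
    };
    MPI_Datatype types[num_members] = { MPI_DOUBLE, MPI_INT, MPI_DOUBLE, MPI_DOUBLE, MPI_INT };
    MPI_Datatype type;
    MPI_Type_create_struct(num_members, lengths, offsets, types, &type);
    MPI_Type_commit(&type);
    return type;
}

A std::vector<crab_claw_pod> can then be sent with v.data() and a count, exactly as in the question's MPI_Send/MPI_Recv calls.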

MPI - How to partition and communicate my array portions between master and worker processes

I am having a problem executing my master/worker MPI program.
The goal is to have the master pass portions of an integer array to the workers, have the workers sort their portions, and then return each portion to the master process, which combines them into finalArray[].
I think it has something to do with how I'm passing the portions of the array between processes, but I can't seem to think of anything new to try.
My code:
#include <iostream>
#include <chrono>
#include <memory>
#include <cmath>
#include <cstdlib>
#include <cstdio>
#include <mpi.h>

int compare(const void* a, const void* b) // used for quick sort method
{
    if (*(int*)a < *(int*)b) return -1;
    if (*(int*)a > *(int*)b) return 1;
    return 0;
}

const int arraySize = 10000;

int main(int argc, char** argv)
{
    int rank;
    int numProcesses;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);
    const int PART = floor(arraySize / (numProcesses - 1));
    auto start = std::chrono::high_resolution_clock::now(); // start timer

    //================================= MASTER PROCESS =================================
    if (rank == 0)
    {
        int bigArray[arraySize];
        int finalArray[arraySize];
        for (int i = 0; i < arraySize; i++) // random number generator
        {
            bigArray[i] = rand();
        }
        for (int i = 0; i < numProcesses - 1; i++)
        {
            MPI_Send(&bigArray, PART, MPI_INT, i + 1, 0, MPI_COMM_WORLD); // send elements of the array
        }
        for (int i = 0; i < numProcesses - 1; i++)
        {
            std::unique_ptr<int[]> tmpArray(new int[PART]);
            MPI_Recv(&tmpArray, PART, MPI_INT, i + 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); // receive sorted array from workers
            for (int k = 0; k < PART; k++)
            {
                finalArray[PART * i + k] = tmpArray[k];
            }
        }
        for (int m = 0; m < arraySize; m++)
        {
            printf(" Sorted Array: %d \n", finalArray[m]); // print my sorted array
        }
    }
    //================================ WORKER PROCESSES ===============================
    if (rank != 0)
    {
        std::unique_ptr<int[]> tmpArray(new int[PART]);
        MPI_Recv(&tmpArray, PART, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); // receive data into local array
        qsort(&tmpArray, PART, sizeof(int), compare); // quick sort
        MPI_Send(&tmpArray, PART, MPI_INT, 0, 0, MPI_COMM_WORLD); // send sorted array back to rank 0
    }
    MPI_Barrier(MPI_COMM_WORLD);
    auto end = std::chrono::high_resolution_clock::now(); // end timer
    std::cout << "process took: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count() // prints timer
              << " nanoseconds\n ";
    MPI_Finalize();
    return 0;
}
I am fairly new to MPI and C++, so any advice on either subject related to this problem is extremely helpful. I realize there may be many problems with this code, so thank you in advance for all your help.
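Two things stand out (an editorial note, as this question carries no answer in the thread): MPI_Send/MPI_Recv need a pointer to the first element of the buffer, but &tmpArray is the address of the std::unique_ptr object itself (the same problem affects the qsort call), and the master sends the same first PART elements to every worker instead of distinct slices. A sketch of the corrected calls, under those assumptions:

// Master: send a distinct slice to each worker.
for (int i = 0; i < numProcesses - 1; i++)
    MPI_Send(bigArray + PART * i, PART, MPI_INT, i + 1, 0, MPI_COMM_WORLD);

// Worker: operate on the managed buffer, not on the smart pointer.
std::unique_ptr<int[]> tmpArray(new int[PART]);
MPI_Recv(tmpArray.get(), PART, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
qsort(tmpArray.get(), PART, sizeof(int), compare);
MPI_Send(tmpArray.get(), PART, MPI_INT, 0, 0, MPI_COMM_WORLD);

The receive loop on the master needs the same tmpArray.get() fix. Note also that with numProcesses - 1 workers, PART * (numProcesses - 1) can be less than arraySize (e.g. 3 * 3333 = 9999 for 4 processes), so the tail of finalArray stays uninitialized.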

MPI_Bcast: one of the processes does not receive

I have a problem with MPI_Bcast. I want to send an array holding how many numbers each process gets to the other processes, but seemingly random processes (rank 2 and the last-but-one) don't receive anything and crash. The count per process can differ by about 1. Can anybody help me?
#include <stdio.h> // printf
#include <mpi.h>
#include <stdlib.h>
#include <time.h>
#include <iostream>
#include "EasyBMP.h"
using namespace std;

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int* per_process = new int[size];
    for (int i = 0; i < size; i++){
        per_process[i] = 0;
    }
    if (rank == 0){
        for (int i = 0; i < size; i++){
            int default_for_process = 12 / size;
            int rest = 12 % size;
            if (i < rest){
                default_for_process++;
            }
            per_process[i] = default_for_process;
        }
    }
    MPI_Bcast(&per_process, size, MPI_INT, 0, MPI_COMM_WORLD);
    for (int i = 0; i < size; i++){
        cout << rank << " " << per_process[i];
    }
    cout << endl;
    MPI_Finalize();
    return 0;
}
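The crash is consistent with one detail of the MPI_Bcast call (an editorial note, as this question carries no answer in the thread): per_process is already an int*, so &per_process passes the address of the pointer variable, not of the array. A sketch of the corrected call:

// Broadcast the buffer the pointer refers to, not the pointer variable itself.
MPI_Bcast(per_process, size, MPI_INT, 0, MPI_COMM_WORLD);

With &per_process, every rank reads or writes `size` ints starting at the stack location of the pointer variable, clobbering it and whatever sits next to it; dereferencing the corrupted pointer in the print loop then fails on seemingly random ranks.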