I am writing code in which each processor must interact with multiple other processors.
For example: I have 12 processors, and Processor 0 has to communicate with, say, 1, 2, 10 and 9. Let's call them the neighbours of Processor 0. Similarly:
Processor 1 has to communicate with, say, 5 and 3.
Processor 2 has to communicate with 5, 1, 0, 10 and 11,
and so on.
The flow of data is two-way, i.e. Processor 0 must send data to 1, 2, 10 and 9 and also receive data from them.
Also, there is no problem with tag calculation.
I have created code which works like this:
for (all neighbours)
{
    store data in vector<double> x;
    MPI_Send(x);
}
MPI_Barrier();
for (all neighbours)
{
    MPI_Recv(x);
    do work with x;
}
Now I am testing this algorithm for different sizes of x and different arrangements of neighbours. The code works for some, but does not work for others; it simply ends in deadlock.
I have also tried:
for (all neighbours)
{
    store data in vector<double> x;
    MPI_Isend(x);
}
MPI_Test();
for (all neighbours)
{
    MPI_Recv(x);
    do work with x;
}
The result is the same, although the deadlock is replaced by NaN in the result, as MPI_Test() tells me that some of the MPI_Isend() operations are not complete and the code jumps immediately to MPI_Recv().
Can anyone guide me in this matter? What am I doing wrong? Or is my fundamental approach itself incorrect?
EDIT: I am attaching a code snippet for a better understanding of the problem. I am basically working on parallelizing an unstructured 3D CFD solver.
I have attached one of the files, with some explanation. I am not broadcasting; I am looping over the neighbours of the parent processor to send data across the interface (this can be defined as a boundary between two zones).
So, if I have 12 processors, and Processor 0 has to communicate with, say, 1, 2, 10 and 9, then 0 is the parent processor and 1, 2, 10 and 9 are its neighbours.
As the file was too long and is part of the solver, to keep things simple I have only kept the MPI function in it.
void Reader::MPI_InitializeInterface_Values() {
    double nbr_interface_id;
    Interface *interface;
    MPI_Status status;
    MPI_Request send_request, recv_request;
    int err, flag;
    int err2;
    char buffer[MPI_MAX_ERROR_STRING];
    int len;
    int count;

    for (int zone_no = 0; zone_no < this->GetNumberOfZones(); zone_no++) { // Number of zones per processor is 1, so basically each zone is an independent processor
        UnstructuredGrid *zone = this->ZoneList[zone_no];
        int no_of_interface = zone->GetNumberOfInterfaces();
        // int count;
        long int count_send = 0;
        long int count_recv = 0;
        long int max_size = 10000; // can be set from test case later
        int max_size2 = 199;
        int proc_no = FlowSolution::processor_number;

        for (int interface_no = 0; interface_no < no_of_interface; interface_no++) { // interface is defined as a boundary between two zones
            interface = zone->GetInterface(interface_no);
            int no_faces = interface->GetNumberOfFaces();
            if (no_faces != 0) {
                std::vector< double > Variable_send; // The vector which stores the data to be sent across the interface
                std::vector< double > Variable_recieve;
                int total_size = FlowSolution::VariableOrder.size() * no_faces;
                Variable_send.resize(total_size);
                Variable_recieve.resize(total_size);
                int nbr_proc_no = zone->GetInterface(interface_no)->GetNeighborZoneId(); // neighbour of parent processor
                int j = 0;
                nbr_interface_id = interface->GetShared_Interface_ID();

                for (std::map<VARIABLE, int>::iterator iterator = FlowSolution::VariableOrder.begin(); iterator != FlowSolution::VariableOrder.end(); iterator++) {
                    for (int face_no = 0; face_no < no_faces; face_no++) {
                        Face *face = interface->GetFace(face_no);
                        int owner_id = face->Getinterface_Original_face_owner_id();
                        double value_send = zone->GetInterface(interface_no)->GetFace(face_no)->GetCell(owner_id)->GetPresentFlowSolution()->GetVariableValue((*iterator).first);
                        Variable_send[j] = value_send;
                        j++;
                    }
                }
                count_send = nbr_proc_no * max_size + nbr_interface_id; // tag for data to be sent
                err2 = MPI_Isend(&Variable_send.front(), total_size, MPI_DOUBLE, nbr_proc_no, count_send, MPI_COMM_WORLD, &send_request);
            } // end of sending
        } // all the processors have sent data to their corresponding neighbours

        MPI_Barrier(MPI_COMM_WORLD);

        for (int interface_no = 0; interface_no < no_of_interface; interface_no++) { // loop over the neighbours of the current processor to receive data
            interface = zone->GetInterface(interface_no);
            int no_faces = interface->GetNumberOfFaces();
            if (no_faces != 0) {
                std::vector< double > Variable_recieve; // The vector which collects the data sent across the interface
                int total_size = FlowSolution::VariableOrder.size() * no_faces;
                Variable_recieve.resize(total_size);
                count_recv = proc_no * max_size + interface_no; // tag to receive data
                int nbr_proc_no = zone->GetInterface(interface_no)->GetNeighborZoneId();
                nbr_interface_id = interface->GetShared_Interface_ID();
                MPI_Irecv(&Variable_recieve.front(), total_size, MPI_DOUBLE, nbr_proc_no, count_recv, MPI_COMM_WORLD, &recv_request);

                /* Now some work is done using received data */
                int j = 0;
                for (std::map<VARIABLE, int>::iterator iterator = FlowSolution::VariableOrder.begin(); iterator != FlowSolution::VariableOrder.end(); iterator++) {
                    for (int face_no = 0; face_no < no_faces; face_no++) {
                        double value_recieve = Variable_recieve[j];
                        j++;
                        Face *face = interface->GetFace(face_no);
                        int owner_id = face->Getinterface_Original_face_owner_id();
                        interface->GetFictitiousCell(face_no)->GetPresentFlowSolution()->SetVariableValue((*iterator).first, value_recieve);
                        double value1 = face->GetCell(owner_id)->GetPresentFlowSolution()->GetVariableValue((*iterator).first);
                        double face_value = 0.5 * (value1 + value_recieve);
                        interface->GetFace(face_no)->GetPresentFlowSolution()->SetVariableValue((*iterator).first, face_value);
                    }
                }
                // Variable_recieve.clear();
            }
        } // end of receiving
    }
}
Working from the problem statement:
Processor 0 has to send to 1, 2, 9 and 10, and receive from them.
Processor 1 has to send to 5 and 3, and receive from them.
Processor 2 has to send to 0, 1, 5, 10 and 11, and receive from them.
There are 12 total processors.
You can make life easier if you just run a 12-step program:
Step 1: Processor 0 sends, others receive as needed, then the converse occurs.
Step 2: Processor 1 sends, others receive as needed, then the converse occurs.
...
Step 12: Profit - there's nothing left to do (because every other processor has already interacted with Processor 11).
Each step can be implemented as an MPI_Scatterv (some sendcounts will be zero), followed by an MPI_Gatherv. 22 total calls and you're done.
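A rough sketch of what one such step could look like (hedged: the function and variable names are illustrative, not taken from the question's code). Every rank calls it once per step; root is the rank whose turn it is, my_count is how many doubles this rank exchanges with that root (0 if it is not one of its neighbours), and counts is only needed on the root itself:
#include <mpi.h>
#include <vector>

void exchange_step(int root, int my_count,
                   const std::vector<int>& counts,      // significant on root only
                   const std::vector<double>& to_root,  // my_count entries
                   std::vector<double>& from_root) {    // receives my_count entries
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<int> displs;
    std::vector<double> rootbuf; // packed per-neighbour data on the root
    if (rank == root) {
        displs.assign(size, 0);
        for (int i = 1; i < size; ++i) displs[i] = displs[i-1] + counts[i-1];
        rootbuf.assign(displs[size-1] + counts[size-1], 0.0); // fill with real data in practice
    }

    // Root -> neighbours; non-neighbours just get a zero-length piece.
    from_root.resize(my_count);
    MPI_Scatterv(rootbuf.data(), counts.data(), displs.data(), MPI_DOUBLE,
                 from_root.data(), my_count, MPI_DOUBLE, root, MPI_COMM_WORLD);

    // Neighbours -> root: the converse transfer of the same step.
    MPI_Gatherv(to_root.data(), my_count, MPI_DOUBLE,
                rootbuf.data(), counts.data(), displs.data(), MPI_DOUBLE,
                root, MPI_COMM_WORLD);
}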
There may be several possible reasons for a deadlock, so you have to be more specific. For example, the standard says: "When standard send operations are used, then a deadlock situation may occur where both processes are blocked because buffer space is not available."
You should use both Isend and Irecv. The general structure should be:
MPI_Request req[2*n];
MPI_Irecv(..., &req[0]);
// ...
MPI_Irecv(..., &req[n-1]);
MPI_Isend(..., &req[n]);
// ...
MPI_Isend(..., &req[2*n-1]);
MPI_Waitall(2*n, req, MPI_STATUSES_IGNORE);
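Applied to the neighbour exchange in the question, that pattern might look roughly like this (hedged: the ring-shaped neighbour list and message length are made up for illustration):
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Hypothetical symmetric neighbour list: previous and next rank in a ring.
    std::vector<int> neighbours = { (rank + size - 1) % size, (rank + 1) % size };
    const int n = static_cast<int>(neighbours.size());
    const int msg_len = 4;

    std::vector<std::vector<double>> sendbuf(n, std::vector<double>(msg_len, rank));
    std::vector<std::vector<double>> recvbuf(n, std::vector<double>(msg_len));
    std::vector<MPI_Request> req(2 * n);

    // Post all receives first, then all sends, then wait for everything.
    for (int i = 0; i < n; ++i)
        MPI_Irecv(recvbuf[i].data(), msg_len, MPI_DOUBLE, neighbours[i],
                  0, MPI_COMM_WORLD, &req[i]);
    for (int i = 0; i < n; ++i)
        MPI_Isend(sendbuf[i].data(), msg_len, MPI_DOUBLE, neighbours[i],
                  0, MPI_COMM_WORLD, &req[n + i]);
    MPI_Waitall(2 * n, req.data(), MPI_STATUSES_IGNORE);

    // recvbuf[i] now holds the data from neighbours[i]; do the work here.
    MPI_Finalize();
    return 0;
}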
The problem can be solved by using MPI_Allgatherv. All I did was set up the send counts so that only the processors I wanted to communicate with had non-zero counts; the other processors had a send count of 0.
This solved my problem.
Thank you everyone for your answers!
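For reference, a minimal sketch of that workaround (hedged: the helper name and the idea of first exchanging the per-rank counts with MPI_Allgather are additions for illustration, not from the original post):
#include <mpi.h>
#include <vector>

std::vector<double> share_all(const std::vector<double>& mine) {
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int my_count = static_cast<int>(mine.size()); // may be 0 for ranks with nothing to share
    std::vector<int> counts(size), displs(size, 0);
    MPI_Allgather(&my_count, 1, MPI_INT, counts.data(), 1, MPI_INT, MPI_COMM_WORLD);
    for (int i = 1; i < size; ++i) displs[i] = displs[i-1] + counts[i-1];

    std::vector<double> all(displs[size-1] + counts[size-1]);
    MPI_Allgatherv(mine.data(), my_count, MPI_DOUBLE,
                   all.data(), counts.data(), displs.data(), MPI_DOUBLE,
                   MPI_COMM_WORLD);
    return all; // data from rank i starts at offset displs[i]
}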
In my application, I have two threads, a producer (thread 1) and a consumer (thread 2). Each thread has an input and output interface (effectively a pointer to a list) that is connected to a third thread which serves as a router.
When the producer writes, it calls memcpy to copy data into a buffer and pushes the buffer into a list. Meanwhile, the router thread is round-robin searching through all the threads that are connected to it and monitoring their interfaces to see if any thread has data to send out. When it sees that thread 1's list is non-empty, it checks to determine which thread the data is intended for. The data is spliced into the destination thread's (in this case thread 2) input list, at which point thread 2 will malloc some memory, memcpy the data into it and return the pointer to this new region.
For my test, I'm measuring throughput to see how long it takes to send 100k messages of varying sizes. Thread 1 sends data of some size, thread 2 reads it and sends back a small reply message, which thread 1 reads. This would be one complete exchange. In the first test, in thread 1, I'm sending all 100k messages and then reading 100k replies. In the second test, in thread 1, I'm alternating between sending a message and waiting for the reply, repeated 100k times. In both tests, thread 2 is in a loop reading the message and sending a reply. I would expect test 1 to have higher throughput because the threads should spend less time waiting around. However, it has markedly worse throughput than test 2. I've measured how long the individual function calls (to read/write) take in the two test cases, and they invariably take longer in test 1 (based on the means and medians, with no delay added), though the numbers are of the same order of magnitude.
When I add a loop doing nothing into thread 1's sending loop in test 1, I see dramatically improved throughput for this case as opposed to not having the delay. My only guess is that adding a delay slows down the producer so the consumer can absorb the data which prevents its input list from growing very large. I'm wondering if there may be other explanations and if so, how I can test for them.
Edit
Unfortunately, my own code is just the test I described above which calls a library that actually performs the reads/writes, creates that third thread etc. It's difficult to make a minimal example out of it because the library is complex and not mine. I provide some pseudocode to illustrate the setup in more detail.
int NUM_ITERATIONS = 100000;
int msg_reply = 2;  // size of the reply message in words
int msg_size = 512; // indicates 512 64-bit words

void generate(int iterations, int size, interface* out) {
    std::vector<long long> vec(size);
    for (int i = 0; i < size; i++)
        vec[i] = (long long) i;
    for (int i = 0; i < iterations; i++)
        out->lib_write((char*) vec.data(), size);
}

void receive(int iterations, int size, interface* in) {
    for (int i = 0; i < iterations; i++) {
        char* data = in->lib_read(size);
    }
}

void producer(interface* in, interface* out) {
    // test 1
    auto start = std::chrono::high_resolution_clock::now();
    // write data of size msg_size, NUM_ITERATIONS times to out
    generate(NUM_ITERATIONS, msg_size, out);
    // read data of size msg_reply, NUM_ITERATIONS times from in
    receive(NUM_ITERATIONS, msg_reply, in);
    auto end = std::chrono::high_resolution_clock::now();
    // using NUM_ITERATIONS, msg_size and time, compute and print throughput to stdout
    print_throughput(end - start, "throughput_0", msg_size);

    // test 2
    start = std::chrono::high_resolution_clock::now();
    for (int j = 0; j < NUM_ITERATIONS; j++) {
        generate(1, msg_size, out);
        receive(1, msg_reply, in);
    }
    end = std::chrono::high_resolution_clock::now();
    print_throughput(end - start, "throughput_1", msg_size);
}

void consumer(interface* in, interface* out) {
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < NUM_ITERATIONS; j++) {
            receive(1, msg_size, in);
            generate(1, msg_reply, out);
        }
    }
}
The calls to lib_write() and lib_read() become fairly complex. To elaborate on the description above, the data gets memcpy'd into a buffer and then moved into a list. The interface has a condition variable member and the write calls its notify_one() method. The third thread is looping through all the interface pointers it has and checking to see if their lists are non-empty. If so, the data is spliced from one output list to the destination's input list using the splice() method in std::list. Meanwhile, the consumer calls the lib_read() which waits on the condition variable while the interface is empty, and then memcpy's the data into a new region and returns it.
// note: these will not compile as is. Undefined variables are class members
char* interface::lib_read(size_t* _size) {
    char* ret;
    {
        std::unique_lock<std::mutex> lock(mutex);
        // packets is an std::list containing the incoming data
        while (packets.empty()) {
            cv.wait(lock);
        }
        curr_read_it = packets.begin();
    }
    size_t buff_size = curr_read_it->size;
    ret = (char*) malloc(buff_size);
    memcpy((char*) ret, (char*) curr_read_it->data, buff_size);
    {
        std::unique_lock<std::mutex> lock(mutex);
        packets.erase(curr_read_it);
        curr_read_it = packets.end();
    }
    return ret;
}

void interface::lib_write(char* data, int size) {
    // indicates the destination thread id
    long long header = 1;
    // buffer is just an array that's max packet sized
    memcpy((char*) buffer.data, &header, sizeof(long long));
    memcpy((char*) buffer.data + sizeof(long long), (char*) data, size * sizeof(long long));
    std::lock_guard<std::mutex> guard(mutex);
    packets.push_back(std::move(buffer));
    cv.notify_one();
}

// this is on thread 3
void route() {
    do {
        // this is a vector containing all the "out" interfaces
        for (int i = 0; i < out_ptrs.size(); i++) {
            interface<long long>* _out = out_ptrs[i];
            if (!_out->empty()) {
                // this just returns the header id (also locks the mutex)
                long long dest = _out->get_dest();
                // looks up the correct interface based on the id and splices
                // a packet from _out into the appropriate one. Locks mutex
                in_ptrs[dest_map[dest]]->splice(_out);
            }
        }
    } while (!done());
}
I was looking for general advice on what factors may influence multithreading performance and what to test for in order to better understand what was going on.
I talked to some other people, and the advice I got that was helpful was to determine whether OS scheduling was the issue (which is what I suspected but was unsure how to test). Essentially, I used taskset and sched_setaffinity() to force the application to run on one core or on a subset of cores and looked at how those runs compared to each other and to the unrestricted case.
Based on the restrictions, I got dramatically different results and could see some trends, so I'm pretty confident in saying that it's an OS scheduling issue. Different scheduling arrangements can yield better performance under different workloads.
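For reference, a minimal sketch of that kind of pinning from inside the program (Linux-specific; the helper name and the choice of core 0 are illustrative). The same restriction can also be applied from the shell without code changes, e.g. taskset -c 0 ./test.
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static bool pin_current_thread_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    // Returns 0 on success; afterwards this thread is only scheduled on `core`.
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

int main() {
    if (!pin_current_thread_to_core(0))
        std::perror("pthread_setaffinity_np");
    // ... start the producer/consumer/router threads and run the test here ...
    return 0;
}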
I am new to MPI. I want to send three ints to three slave nodes to create dynamic arrays, and each array will be sent back to the master. According to this post, I modified the code, and it's close to the right answer. But I hit a breakpoint when receiving the array from slave #3 (m == 3) in the receiver code. Thank you in advance!
My code is as follows:
#include <mpi.h>
#include <iostream>
#include <stdlib.h>

int main(int argc, char** argv)
{
    int firstBreakPt, lateralBreakPt;
    //int reMatNum1, reMatNum2;
    int tmpN;
    int breakPt[3][2] = {{3,5},{6,9},{4,7}};
    int myid, numprocs;
    MPI_Status status;
    // double *reMat1;
    // double *reMat2;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    tmpN = 15;

    if (myid == 0)
    {
        // send three parameters to slaves;
        for (int i = 1; i < numprocs; i++)
        {
            MPI_Send(&tmpN, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            firstBreakPt = breakPt[i-1][0];
            lateralBreakPt = breakPt[i-1][1];
            //std::cout<<i<<" "<<breakPt[i-1][0] <<" "<<breakPt[i-1][1]<<std::endl;
            MPI_Send(&firstBreakPt, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&lateralBreakPt, 1, MPI_INT, i, 2, MPI_COMM_WORLD);
        }
        // receive arrays from slaves;
        for (int m = 1; m < numprocs; m++)
        {
            MPI_Probe(m, 3, MPI_COMM_WORLD, &status);
            int nElems3, nElems4;
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems3);
            // Allocate buffer of appropriate size
            double *result3 = new double[nElems3];
            MPI_Recv(result3, nElems3, MPI_DOUBLE, m, 3, MPI_COMM_WORLD, &status);
            std::cout << "Tag is 3, ID is " << m << std::endl;
            for (int ii = 0; ii < nElems3; ii++)
            {
                std::cout << result3[ii] << std::endl;
            }

            MPI_Probe(m, 4, MPI_COMM_WORLD, &status);
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems4);
            // Allocate buffer of appropriate size
            double *result4 = new double[nElems4];
            MPI_Recv(result4, nElems4, MPI_DOUBLE, m, 4, MPI_COMM_WORLD, &status);
            std::cout << "Tag is 4, ID is " << m << std::endl;
            for (int ii = 0; ii < nElems4; ii++)
            {
                std::cout << result4[ii] << std::endl;
            }
        }
    }
    else
    {
        // receive three parameters from master;
        MPI_Recv(&tmpN, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Recv(&firstBreakPt, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&lateralBreakPt, 1, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
        // width
        int width1 = (rand() % (tmpN - firstBreakPt + 1)) + firstBreakPt;
        int width2 = (rand() % (tmpN - lateralBreakPt + 1)) + lateralBreakPt;
        // create dynamic arrays
        double *reMat1 = new double[width1*width1];
        double *reMat2 = new double[width2*width2];
        for (int n = 0; n < width1; n++)
        {
            for (int j = 0; j < width1; j++)
            {
                reMat1[n*width1+j] = (double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX);
                //a[i*Width+j]=1.00;
            }
        }
        for (int k = 0; k < width2; k++)
        {
            for (int h = 0; h < width2; h++)
            {
                reMat2[k*width2+h] = (double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX);
                //a[i*Width+j]=1.00;
            }
        }
        // send it back to master
        MPI_Send(reMat1, width1*width1, MPI_DOUBLE, 0, 3, MPI_COMM_WORLD);
        MPI_Send(reMat2, width2*width2, MPI_DOUBLE, 0, 4, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    std::cin.get();
    return 0;
}
P.S. This code is the right answer.
Use collective MPI operations, as Zulan suggested. For example, the first thing your code does is have the root send the same value to all the slaves, which is a broadcast, i.e. MPI_Bcast(). Then the root sends a different value to each slave, which is a scatter, i.e. MPI_Scatter().
The last operation is that the slave processes send variably-sized data to the root, for which the MPI_Gatherv() function exists. However, to use this function, you need to:
allocate the incoming buffer on the root (there is no allocation for reMat1 and reMat2 in the first if-branch of your code), so the root needs to know their counts,
tell MPI_Gatherv() on the root how many elements will be received from each slave and where to put them.
This problem can easily be solved by a so-called parallel prefix; look at MPI_Scan() or MPI_Exscan(). A sketch follows below.
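A minimal sketch of that collective pattern (hedged: the per-rank payload is made up just to have something to gather; the root contributes nothing, matching the question's setup):
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Illustrative payload: rank r contributes r+1 doubles, the root contributes none.
    int my_count = (rank == 0) ? 0 : rank + 1;
    std::vector<double> mine(my_count, static_cast<double>(rank));

    // Exclusive prefix sum gives each rank its offset in the gathered buffer.
    int my_displ = 0;
    MPI_Exscan(&my_count, &my_displ, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) my_displ = 0; // undefined on rank 0 by the standard

    std::vector<int> counts(size), displs(size);
    MPI_Gather(&my_count, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Gather(&my_displ, 1, MPI_INT, displs.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

    std::vector<double> all;
    if (rank == 0) all.resize(displs[size-1] + counts[size-1]);
    MPI_Gatherv(mine.data(), my_count, MPI_DOUBLE,
                all.data(), counts.data(), displs.data(), MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}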
Here you create randomized widths:
int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;
which you later use to send data back to process 0
MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);
But the receiver expects a different number of elements:
MPI_Recv(reMat1,firstBreakPt*tmpN*firstBreakPt*tmpN,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);
which causes problems. Process 0 does not know what sizes each slave process generated, so you have to send the sizes back the same way you sent the parameters to the slaves.
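A minimal sketch of that suggestion (hedged: the helper names are illustrative; tag 3 is used for the size and tag 4 for the data, loosely mirroring the tags already in the question):
#include <mpi.h>
#include <vector>

void slave_send_matrix(const std::vector<double>& mat, int width) {
    MPI_Send(&width, 1, MPI_INT, 0, 3, MPI_COMM_WORLD);
    MPI_Send(mat.data(), width * width, MPI_DOUBLE, 0, 4, MPI_COMM_WORLD);
}

std::vector<double> master_recv_matrix(int from_rank) {
    int width = 0;
    MPI_Recv(&width, 1, MPI_INT, from_rank, 3, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    std::vector<double> mat(width * width);
    MPI_Recv(mat.data(), width * width, MPI_DOUBLE, from_rank, 4,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return mat;
}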
I am trying to do an all-to-one communication out-of-order. Basically I have multiple floating point arrays of the same size, identified by an integer id.
Each message should look like:
<int id><float array data>
On the receiver side, it knows exactly how many arrays there are, and thus posts the exact number of receives. Upon receiving a message, it parses the id and puts the data into the right place. The problem is that a message could be sent to the receiving process from any other process. (e.g. the producers have a work-queue structure, and process whichever id is available on the queue.)
Since MPI only guarantees in-order delivery for point-to-point messages between a given pair of processes, I can't trivially put the integer id and the FP data in two separate messages; otherwise the receiver might not be able to match the id with the data. MPI also doesn't allow two types of data in one send.
I can only think of two approaches.
1) The receiver has an array of size m (source[m]), m being the number of sending nodes. The sender sends the id first, then the data. The receiver saves the id to source[i] after receiving an integer message from sender i. Upon receiving an FP array from sender i, it checks source[i], gets the id, and moves the data to the right place. It works because MPI guarantees in-order P2P communication. It requires the receiver to keep state information for each sender. To make matters worse, if a single sending process can have two ids sent before the data (e.g. multi-threaded), this mechanism won't work.
2) Treat the id and the FP data as bytes, and copy them into a send buffer. Send them as MPI_CHAR, and the receiver casts them back to an integer and an FP array. Then I need to pay the additional cost of copying things into a byte buffer on the sender side. The total temporary buffer also grows as I increase the number of threads within an MPI process. A rough sketch of this packing approach follows below.
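For reference, a rough sketch of option 2 using MPI_Pack/MPI_Unpack instead of manual byte copies (hedged: the fixed array length N and the helper names are assumptions for illustration):
#include <mpi.h>
#include <vector>

const int N = 8; // floats per array; assumed fixed and known on both sides

void send_packed(int id, const float* data, int dest, MPI_Comm comm) {
    int int_bytes = 0, float_bytes = 0, pos = 0;
    MPI_Pack_size(1, MPI_INT, comm, &int_bytes);
    MPI_Pack_size(N, MPI_FLOAT, comm, &float_bytes);
    std::vector<char> buf(int_bytes + float_bytes);
    MPI_Pack(&id, 1, MPI_INT, buf.data(), (int)buf.size(), &pos, comm);
    MPI_Pack(data, N, MPI_FLOAT, buf.data(), (int)buf.size(), &pos, comm);
    MPI_Send(buf.data(), pos, MPI_PACKED, dest, 0, comm);
}

void recv_packed(int* id, float* data, MPI_Comm comm) {
    MPI_Status st;
    MPI_Probe(MPI_ANY_SOURCE, 0, comm, &st);
    int bytes = 0, pos = 0;
    MPI_Get_count(&st, MPI_PACKED, &bytes);
    std::vector<char> buf(bytes);
    MPI_Recv(buf.data(), bytes, MPI_PACKED, st.MPI_SOURCE, 0, comm, MPI_STATUS_IGNORE);
    MPI_Unpack(buf.data(), bytes, &pos, id, 1, MPI_INT, comm);
    MPI_Unpack(buf.data(), bytes, &pos, data, N, MPI_FLOAT, comm);
}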
Neither of these is a perfect solution. I don't want to lock anything inside a process. I wonder if any of you have better suggestions.
Edit: The code will be run on a shared cluster with InfiniBand. The machines will be assigned randomly, so I don't think TCP sockets will be able to help me here. In addition, IPoIB looks expensive. I need the full 40 Gbps for communication while keeping the CPU busy with the computation.
You can specify MPI_ANY_SOURCE as the source rank in the receive function, then sort the messages using their tags, which is easier than creating custom messages. Here's a simplified example:
#include <stdio.h>
#include "mpi.h"

int main() {
    MPI_Init(NULL, NULL);
    int rank = 0;
    int size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Receiver is the last node for simplicity in the arrays
    if (rank == size-1) {
        // Receiver has size-1 slots
        float data[size-1];
        MPI_Request request[size-1];
        // Use tags to sort receives
        for (int tag = 0; tag < size-1; ++tag) {
            printf("Receiver for id %d\n", tag);
            // Non-blocking receive
            MPI_Irecv(data+tag, 1, MPI_FLOAT,
                      MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &request[tag]);
        }
        // Wait for all requests to complete
        printf("Waiting...\n");
        MPI_Waitall(size-1, request, MPI_STATUSES_IGNORE);
        for (size_t i = 0; i < size-1; ++i) {
            printf("%f\n", data[i]);
        }
    } else {
        // Producer
        int id = rank;
        float data = rank;
        printf("Sending {%d}{%f}\n", id, data);
        MPI_Send(&data, 1, MPI_FLOAT, size-1, id, MPI_COMM_WORLD);
    }
    return MPI_Finalize();
}
As somebody already wrote, you can use MPI_ANY_SOURCE to receive from any source. To send two different kinds of data in a single send you can use a derived datatype:
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define asize 10

typedef struct data_ {
    int   id;
    float array[asize];
} data;

int main() {
    MPI_Init(NULL, NULL);

    int rank = -1;
    int size = -1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    data buffer;
    // Define and commit a new datatype
    int          blocklength[2];
    MPI_Aint     displacement[2];
    MPI_Datatype datatypes[2];
    MPI_Datatype mpi_tdata;
    MPI_Aint     startid, startarray;
    MPI_Get_address(&(buffer.id), &startid);
    MPI_Get_address(&(buffer.array[0]), &startarray);

    blocklength[0]  = 1;
    blocklength[1]  = asize;
    displacement[0] = 0;
    displacement[1] = startarray - startid;
    datatypes[0]    = MPI_INT;
    datatypes[1]    = MPI_FLOAT;

    MPI_Type_create_struct(2, blocklength, displacement, datatypes, &mpi_tdata);
    MPI_Type_commit(&mpi_tdata);

    if (rank == 0) {
        int count = 0;
        MPI_Status status;
        while (count < size-1) {
            // Blocking receive from any source
            printf("Receiving message %d\n", count);
            MPI_Recv(&buffer, 1, mpi_tdata, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            printf("Message tag %d, first entry %g\n", buffer.id, buffer.array[0]);
            // Counting the received messages
            count++;
        }
    } else {
        // Initialize buffer to be sent
        buffer.id = rank;
        for (int ii = 0; ii < asize; ii++) {
            buffer.array[ii] = 10*rank + ii;
        }
        // Send buffer
        MPI_Send(&buffer, 1, mpi_tdata, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Type_free(&mpi_tdata);
    MPI_Finalize();
    return 0;
}
I'm trying to establish a SerialPort connection which transfers 16-bit data packages at a rate of 10-20 kHz. I'm programming this in C++/CLI. The sender just enters an infinite while-loop after receiving the letter "s" and constantly sends 2 bytes with the data.
A problem with the sending side is very unlikely, since a simpler approach works perfectly but is too slow (in that approach, the receiver always sends an "a" first and then gets one package consisting of 2 bytes; this leads to a speed of around 500 Hz).
Here is the important part of this working but slow approach:
public: SerialPort^ port;
in main:
Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
port = gcnew SerialPort("COM16",384000,p,8,s);
port->Open();
and then doing as often as wanted:
port->Write("a");
int i = port->ReadByte();
int j = port->ReadByte();
This is the actual approach I'm working with:
static int values[1000000];
static int counter = 0;

void reader(void)
{
    SerialPort^ port;
    Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
    StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
    port = gcnew SerialPort("COM16", 384000, p, 8, s);
    port->Open();

    unsigned int i = 0;
    unsigned int j = 0;

    port->Write("s"); // with this command, the sender starts to send constantly

    while (true)
    {
        i = port->ReadByte();
        j = port->ReadByte();
        values[counter] = j + (i*256);
        counter++;
    }
}
in main:
Thread^ readThread = gcnew Thread(gcnew ThreadStart(reader));
readThread->Start();
The counter increases (much more) rapidly at a rate of 18472 packages/s, but the values are somehow wrong.
Here is an example:
The value should look like this, with the last 4 bits changing randomly (it's a signal from an analogue-to-digital converter):
111111001100111
Here are some values of the threaded solution given in the code:
1110011001100111
1110011000100111
1110011000100111
1110011000100111
So it looks like the connection reads the data in the middle of the package (to be exact: 3 bits too late). What can I do? I want to avoid a solution where this error is fixed later in the code while reading the packages, because I don't know whether the shifting error gets worse when I edit the reading code later, which I most likely will.
Thanks in advance,
Nikolas
PS: If this helps, here is the code of the sender side (an ATmega168), written in C.
uint8_t active = 0;

void uart_puti16(uint16_t val) // function that writes the data to the serial port
{
    while ( !( UCSR0A & (1<<UDRE0)) ) // wait until serial port is ready
        nop();                        // wait 1 cycle
    UDR0 = val >> 8;                  // write first byte to sending register

    while ( !( UCSR0A & (1<<UDRE0)) ) // wait until serial port is ready
        nop();                        // wait 1 cycle
    UDR0 = val & 0xFF;                // write second byte to sending register
}
in main:
while(1)
{
    if (active == 1)
    {
        uart_puti16(read()); // read() is the function that gives a 16-bit data set
    }
}

ISR(USART_RX_vect) // interrupt handler for a received byte
{
    if (UDR0 == 'a') // if only 1 single data package is requested
    {
        uart_puti16(read());
    }
    if (UDR0 == 's') // for activating constant sending
    {
        active = 1;
    }
    if (UDR0 == 'e') // for deactivating constant sending
    {
        active = 0;
    }
}
At the given bit rate of 384,000 you should get 38,400 bytes of data (8 bits of real data plus 2 framing bits) per second, or 19,200 two-byte values per second.
How fast is counter increasing in both instances? I would expect any modern computer to keep up with that rate whether using events or directly polling.
You do not show your simpler approach which is stated to work. I suggest you post that.
Also, set a breakpoint at the line
values[counter] = j + (i*256);
There, inspect i and j. Share the values you see for those variables on the very first iteration through the loop.
This is a guess based entirely on reading the code at http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.datareceived.aspx#Y228. With this caveat out of the way, here's my guess:
Your event handler is being called when data is available to read -- but you are only consuming two bytes of the available data. Your event handler may only be called every 1024 bytes. Or something similar. You might need to consume all the available data in the event handler for your program to continue as expected.
Try to re-write your handler to include a loop that reads until there is no more data available to consume.
I have C++ MPI code that works, in that it compiles and does indeed launch on the specified number of processors (n). The problem is that it simply does the same calculation n times, rather than doing one calculation n times faster.
I have hacked together quite a few examples I have found on various sites, and it appears I am missing the proper use of MPI_Send and MPI_Recv, but I can't find an instance of these commands that takes a function as input (and am confused as to why these MPI commands would be useful for anything other than functions).
My code is below. Essentially it calls a C++ function I wrote to get Fisher's Exact Test p-value. The random-number bit is just something I put in to test the speed.
What I want this program to do is farm out Fishers.TwoTailed with each set of random variables (i.e., A, B, C, and D) to a different processor, rather than doing the exact same calculation on multiple processors. Thanks in advance for any insight--cheers!
Here is the code:
int main (int argc, char* argv[])
{
    int id;
    int p;
    //
    // Initialize MPI.
    //
    MPI::Init ( argc, argv );
    //
    // Get the number of processors.
    //
    p = MPI::COMM_WORLD.Get_size ( );
    //
    // Get the rank of this processor.
    //
    id = MPI::COMM_WORLD.Get_rank ( );

    FishersExactTest Fishers;

    int i = 0;
    while (i < 10) {
        int A = 0 + rand() % (100 - 0);
        int B = 0 + rand() % (100 - 0);
        int C = 0 + rand() % (100 - 0);
        int D = 0 + rand() % (100 - 0);
        cout << Fishers.TwoTailed(A, B, C, D) << endl;
        i += 1;
    }

    MPI::Finalize ( );
    return 0;
}
You should look into some basic training about parallel computing and MPI. One good resource that taught me the basics was a free set of online courses put up by the National Center for Supercomputing Applications (NCSA).
You have to tell MPI how to parallelize the code - it won't do it automatically.
In other words, you can't initialize MPI on all the systems and then pass them the same loop. You want to use the id of each processor to determine which part of the loop it will work on. Then you need them to all pass their results back to ID 0.
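A rough sketch of that idea (hedged: it assumes a simple round-robin split of the iterations and uses a stand-in value instead of the actual Fishers.TwoTailed call):
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int total_iters = 10;
    // Each rank handles the iterations i with i % size == rank.
    for (int i = rank; i < total_iters; i += size) {
        double p_value = 0.01 * i; // stand-in for Fishers.TwoTailed(A, B, C, D)
        if (rank == 0) {
            std::printf("iteration %d: %f\n", i, p_value);
        } else {
            MPI_Send(&p_value, 1, MPI_DOUBLE, 0, i, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        // Collect the iterations computed by the other ranks, tagged by i.
        for (int i = 0; i < total_iters; ++i) {
            if (i % size == 0) continue; // already done locally on rank 0
            double p_value;
            MPI_Recv(&p_value, 1, MPI_DOUBLE, i % size, i,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("iteration %d: %f\n", i, p_value);
        }
    }
    MPI_Finalize();
    return 0;
}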
All of the above answers are perfectly correct. Let me just add a little bit:
Here, since it looks like you're just doing random sampling, all you have to do to get the different processors to generate different random numbers to give to Fishers.TwoTailed is to ensure they all have different seeds to the PRNG:
int main (int argc, char* argv[])
{
    int id;
    int p;
    //
    // Initialize MPI.
    //
    MPI::Init ( argc, argv );
    //
    // Get the number of processors.
    //
    p = MPI::COMM_WORLD.Get_size ( );
    //
    // Get the rank of this processor.
    //
    id = MPI::COMM_WORLD.Get_rank ( );

    FishersExactTest Fishers;

    srand(id); // <--- each rank gets a different seed

    int i = 0;
    while (i < 10) {
        int A = 0 + rand() % (100 - 0);
        int B = 0 + rand() % (100 - 0);
        int C = 0 + rand() % (100 - 0);
        int D = 0 + rand() % (100 - 0);
        cout << Fishers.TwoTailed(A, B, C, D) << endl;
        i += 1;
    }

    MPI::Finalize ( );
    return 0;
}
Because the loop runs 10 times, you'll still get each process doing 10 samples. If you want them to do a total of 10, you can divide 10 by p and do something to distribute the remainder, e.g.:
int niters = (10+id)/p;
int i=0;
while (i < niters) {
...
}
Well, what messages do you get when you run your MPI job? Just to reiterate what the others have said, you will have to explicitly define what the job of each processor is: for example, if you are rank 0 (the default), do this; if you are rank 1, do that; and so on, defining the role of each rank. Then you could, based on how you structure your code, have nodes Send/Recv, Gather, Scatter, etc.