MPI - how to send a value to a specific position in an array - c++

I want to send a value to a position in an array of another process, so:
1st process: MPI_Isend(&val, ..., process, ...)
2nd process: MPI_Recv(&array[i], ..., process, ...)
I know the index i on the first process. I also know that I can't simply send i first and then val and store i in a variable on the receiving side, because other processes can change i in between (the 2nd process accepts messages from many others).

First of all, other sends/receives should not (and need not) overwrite i. Keep your messages clearly separated; that's what the tag is for! rank_2 can also distinguish which rank sent the data, so you can keep one i for every rank you await a message from.
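A minimal sketch of that per-sender bookkeeping (the tag values, datatypes and the assumption that sender ranks are 0..nsenders-1 are illustrative, not from the question):
// Hypothetical sketch: the receiver keeps one pending index per sender rank,
// so concurrent senders cannot overwrite each other's i.
#include <mpi.h>
#include <vector>
enum { TAG_INDEX = 1, TAG_VALUE = 2 };
void receive_into_array(std::vector<double>& array, int nsenders)
{
    std::vector<int> pending_index(nsenders, -1);      // one i per sender rank
    // Assumes each sender sends exactly one index message and one value message.
    for (int msgs = 0; msgs < 2 * nsenders; ++msgs) {
        MPI_Status st;
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        if (st.MPI_TAG == TAG_INDEX) {
            // Remember the index separately for this particular sender.
            MPI_Recv(&pending_index[st.MPI_SOURCE], 1, MPI_INT,
                     st.MPI_SOURCE, TAG_INDEX, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {                                        // TAG_VALUE
            double val;
            MPI_Recv(&val, 1, MPI_DOUBLE,
                     st.MPI_SOURCE, TAG_VALUE, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            array[pending_index[st.MPI_SOURCE]] = val;
        }
    }
}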
Finally, you might want to check out one-sided MPI communication (MPI_Win). With that technique, rank_1 can 'drop' the message directly into rank_2's array at a position known only to rank_1.
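A rough sketch of the one-sided variant (the window size, datatypes, ranks and index are assumptions for illustration):
// Hypothetical sketch of one-sided communication: rank 1 writes directly
// into rank 2's array at index i, which only rank 1 knows.
#include <mpi.h>
void one_sided_example(double* array, int local_size, int my_rank)
{
    MPI_Win win;
    // Collective call: every rank exposes its array in a window.
    MPI_Win_create(array, local_size * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);                  // start an access epoch
    if (my_rank == 1) {
        double val = 3.14;
        int i = 7;                          // position known only to rank 1
        MPI_Put(&val, 1, MPI_DOUBLE,        // origin buffer
                2,                          // target rank
                i, 1, MPI_DOUBLE,           // target displacement and type
                win);
    }
    MPI_Win_fence(0, win);                  // complete the epoch; the data is now visible
    MPI_Win_free(&win);
}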

c++ streaming udp data into a queue?

I am streaming data as a string over UDP into a Socket class inside Unreal Engine. This is threaded and runs in the background.
My read function is:
float translate;
void FdataThread::ReceiveUDP()
{
    uint32 Size;
    TArray<uint8> ReceivedData;
    if (ReceiverSocket->HasPendingData(Size))
    {
        int32 Read = 0;
        ReceivedData.SetNumUninitialized(FMath::Min(Size, 65507u));
        ReceiverSocket->RecvFrom(ReceivedData.GetData(), ReceivedData.Num(), Read, *targetAddr);
    }
    FString str = FString(bytesRead, UTF8_TO_TCHAR((const UTF8CHAR *)ReceivedData));
    translate = FCString::Atof(*str);
}
I then call the translate variable from another class, on a Tick, or timer.
My test case sends an incrementing number from another application.
If I print this number from inside the above Read function, it looks as expected, counting up incrementally.
When I print it from the other thread, it is missing some of the numbers.
I believe this is because I call it on the Tick, so it misses out some data due to processing time.
My question is:
Is there a way to queue the incoming data, so that when I pull the value, it is the next incremental value and not the current one? What is the best way to go about this?
Thank you, please let me know if I have not been clear.
Is this the complete code? ReceivedData isn't used after it's filled with data from the socket. Instead, an (in this code) undefined variable 'buffer' is being used.
Also, it seems that the while loop could run multiple times, overwriting old data in the ReceivedData buffer. Add some debugging messages to see whether RecvFrom actually reads all bytes from the socket. I believe it reads only one 'packet'.
Finally, especially when you're using UDP sockets over the network, note that the UDP protocol isn't guaranteed to actually deliver its packets. However, I doubt this is causing your problems if you're using it on a single computer or a local network.
Your read loop doesn't make sense. You are reading and throwing away all datagrams but the last in any given sequence that happen to be in the socket receive buffer at the same time. The translate call should be inside the loop, and the loop should be while(true), or while (running), or similar.
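For the queueing part of the question, one common pattern (sketched below with made-up names; Unreal's own thread-safe TQueue would serve the same purpose) is to have the receive thread push each parsed value into a locked queue and have the game thread drain it on Tick:
// Hypothetical sketch: producer/consumer queue between the UDP thread and
// the thread that consumes the values on Tick. Names are illustrative.
#include <mutex>
#include <optional>
#include <queue>
class ValueQueue
{
public:
    void Push(float v)                       // called from the receive thread
    {
        std::lock_guard<std::mutex> lock(Mutex);
        Values.push(v);
    }
    std::optional<float> Pop()               // called from the Tick/consumer thread
    {
        std::lock_guard<std::mutex> lock(Mutex);
        if (Values.empty())
            return std::nullopt;
        float v = Values.front();
        Values.pop();
        return v;
    }
private:
    std::mutex Mutex;
    std::queue<float> Values;
};
On Tick, keep calling Pop() until it comes back empty; that way no values are skipped even if several datagrams arrived between two ticks.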

Retrieve buffer with multiple overlapped I/O requests

There is something I'd like to know about overlapped I/O under Windows, both with and without I/O completion ports.
I know in advance how many packets I will be receiving after using WSASend().
So I'd like to do this:
for (int i = 0; i < n; i++)
WSARecv(sock, &buffer_array[i], 1, NULL, 0, &overlapped, completion_routine);
My problem is : how can I know which buffer has been filled upon notification the buffer has been filled? I mean, without guessing by the order of the calls (buffer[0], buffer[1], buffer[2], etc.).
I would find an alternative solution that gives me the buffer pointer at the time of the notification much cleaner, for example, and more easily changeable/adaptable as the design of my application evolves.
Thanks.
Right now you are starting n concurrent receive operations. Instead, start them one after the other. Start the next one when the previous one has completed.
When using a completion routine, the hEvent field in the OVERLAPPED block is unused and can be used to pass context info into the completion routine. Typically, this would be a pointer to a buffer class instance or an index to an array of buffer instances. Often, the OVL block would be a struct member of the instance since you need a separate OVL per call.
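A sketch of that context-passing idea (struct and function names are made up for illustration): make the OVERLAPPED block the first member of a per-buffer context, then recover the context inside the completion routine.
// Hypothetical sketch: one OVERLAPPED per outstanding WSARecv, embedded in a
// context struct so the completion routine knows exactly which buffer was filled.
#include <winsock2.h>
struct RecvContext
{
    WSAOVERLAPPED ovl;     // must be the first member and stay valid until completion
    WSABUF        wsabuf;
    char          data[4096];
    int           index;   // which logical buffer this is
};
void CALLBACK OnRecvComplete(DWORD error, DWORD bytes,
                             LPWSAOVERLAPPED overlapped, DWORD /*flags*/)
{
    // The OVERLAPPED pointer we get back is the first member of RecvContext,
    // so we can cast straight back to the owning context.
    RecvContext* ctx = reinterpret_cast<RecvContext*>(overlapped);
    // If error == 0, ctx->data now holds 'bytes' bytes for buffer ctx->index.
}
void StartRecv(SOCKET sock, RecvContext* ctx)
{
    ZeroMemory(&ctx->ovl, sizeof(ctx->ovl));
    ctx->wsabuf.buf = ctx->data;
    ctx->wsabuf.len = sizeof(ctx->data);
    DWORD flags = 0;
    // Completion routines run when the thread enters an alertable wait (e.g. SleepEx).
    WSARecv(sock, &ctx->wsabuf, 1, nullptr, &flags, &ctx->ovl, OnRecvComplete);
}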

How to determine length of buffer at client side

I have a server sending a multi-dimensional character array
char buff1[][3] = { {0xff,0xfd,0x18}, {0xff,0xfd,0x1e}, {0xff,0xfd,21} };
In this case buff1 carries 3 messages (each having 3 characters). There could be multiple buffer instances on the server side with a variable number of messages (note: each message will always have 3 characters), e.g.
char buff2[][3] = { {0xff,0xfd,0x20}, {0xff,0xfd,0x27} };
How should I store the size of these buffers on the client side when compiling the code?
The server should send information about the length (and any other structure) of the message with the message as part of the message.
An easy way to do that is to send the number of bytes in the message first, then the bytes in the message. Often you also want to send the version of the protocol (so you can detect mismatches) and maybe even a message id header (so you can send more than one kind of message).
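A minimal sketch of that length-prefixed framing (the 2-byte big-endian length and the helper names are assumptions, not part of the question):
// Hypothetical sketch: the sender prefixes the payload with its length so the
// client knows how many bytes (and therefore how many 3-byte messages) follow.
#include <cstdint>
#include <vector>
// Sender side: build one frame = [2-byte big-endian length][payload].
std::vector<uint8_t> make_frame(const uint8_t* payload, uint16_t len)
{
    std::vector<uint8_t> frame;
    frame.push_back(static_cast<uint8_t>(len >> 8));    // length, high byte
    frame.push_back(static_cast<uint8_t>(len & 0xff));  // length, low byte
    frame.insert(frame.end(), payload, payload + len);
    return frame;
}
// Client side: read the 2 length bytes first, then read exactly that many
// more bytes; len / 3 gives the number of 3-character messages.
uint16_t parse_length(const uint8_t header[2])
{
    return static_cast<uint16_t>((header[0] << 8) | header[1]);
}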
If blazing fast performance isn't the goal (and you are talking over a network interface, which tends to be slower than computers: parsing may be cheap enough that you don't care), using a higher level protocol or format is sometimes a good idea (json, xml, whatever). This also helps with debugging problems, because instead of debugging your custom protocol, you get to debug the higher level format.
Alternatively, you can send some sign that the sequence has terminated. If there is a value that is never a valid sequence element (such as 0,0,0), you could send that to say "no more data". Or you could send each element with a header saying if it is the last element, or the header could say that this element doesn't exist and the last element was the previous one.
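And a sketch of the terminator variant, assuming {0,0,0} is never a valid message:
// Hypothetical sketch: the sender appends a 3-byte sentinel {0,0,0} after the
// last real message; the client reads 3 bytes at a time until it sees it.
#include <cstring>
bool is_sentinel(const unsigned char msg[3])
{
    static const unsigned char sentinel[3] = { 0, 0, 0 };
    return std::memcmp(msg, sentinel, 3) == 0;
}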

measuring concurrent loop times in erlang

I create a ring of processes in Erlang and wish to measure both the time it takes the first message to pass through the network and the time for the entire message series; each time the first node gets the message back, it sends another one.
Right now, in the first node, I have the following code:
receive
    stop ->
        io:format("all processes stopped!~n"),
        true;
    start ->
        statistics(runtime),
        Son ! {number, 1},
        msg(PID, Son, M, 1);
    {_, M} ->
        {Time1, _} = statistics(runtime),
        io:format("The last message has arrived after ~p! ~n", [Time1*1000]),
        Son ! stop;
Of course I start the statistics when sending the first message.
As you can see, I use the Time_Since_Last_Call for the first message loop and wish to use the Total_Run_Time for the entire run; the problem is that Total_Run_Time is cumulative from the first time I start the statistics.
The second thought I had was to use another process with two receive loops, getting the times for each one, adding them up and printing, but I'm sure Erlang can do better than this.
I guess the best way to solve this would be to somehow flush the Total_Run_Time, but I couldn't find out how that could be done. Any ideas how this can be tackled?
One way to measure round-trip times would be to send a timestamp along with each message. When the first node receives the message, it can then measure the round-trip time, calculating Total_Run_Time - Timestamp.
To calculate the total run time, I would memorize the first timestamp in the process state (or dictionary), and calculate the total run time when stopping the test.
Besides, given that you mention the network, are you sure that CPU time (which is what statistics(runtime) measures) is what you're after? Perhaps wall clock time would be more appropriate.

recv windows, one byte per call, what the?

c++
#define BUF_LEN 1024
The code below only receives one byte when it's called, then immediately moves on.
output = new char[BUF_LEN];
bytes_recv = recv(cli, output, BUF_LEN, 0);
output[bytes_recv] = '\0';
Any idea how to make it receive more bytes?
EDIT: the client connecting is Telnet.
The thing to remember about networking is that you will be able to read as much data as has been received. Since your code is asking for 1024 bytes and you only read 1, then only 1 byte has been received.
Since you are using a telnet client, it sounds like you have it configured in character mode. In this mode, as soon as you type a character, it will be sent.
Try to reconfigure your telnet client in line mode. In line mode, the telnet client will wait until you hit return before it sends the entire line.
On my telnet client, to do that I first type ctrl-] to get to the telnet prompt and then type "mode line" to configure telnet in line mode.
Update
On further thought, this is actually a very good problem to have.
In the real world, your data can get fragmented in unexpected ways. The client may make a single send() call of N bytes, but the data may not arrive in a single packet. If your code can handle bytes arriving one by one, then you know it will work no matter how the data arrives.
What you need to do is make sure that you accumulate your data across multiple receives. After your recv call returns, you should append the data to a buffer. Something like:
char *accumulate_buffer = new char[BUF_LEN];
size_t accumulate_buffer_len = 0;
...
bytes_recv = recv(fd,
                  accumulate_buffer + accumulate_buffer_len,
                  BUF_LEN - accumulate_buffer_len,
                  0);
if (bytes_recv > 0)
    accumulate_buffer_len += bytes_recv;
if (can_handle_data(accumulate_buffer, accumulate_buffer_len))
{
    handle_data(accumulate_buffer, accumulate_buffer_len);
    accumulate_buffer_len = 0;
}
This code keeps accumulating the recv into a buffer until there is enough data to handle. Once you handle the data, you reset the length to 0 and you start accumulating afresh.
First, in this line:
output[bytes_recv] = '\0';
you need to check whether bytes_recv < 0 before you do that, because you might have an error. And the way your code currently works, you'll just stomp on some random piece of memory (likely the byte just before the buffer).
Secondly, the fact you are null terminating your buffer indicates that you're expecting to receive ASCII text with no embedded null characters. Never assume that, you will be wrong at the worst possible time.
Lastly, stream sockets have a model that's basically a very long piece of tape with lots of letters stamped on it. There is no promise that the tape is going to be moving at any particular speed. When you do a recv call you're saying "Please give me as many letters from the tape as you have so far, up to this many." You may get as many as you ask for, or you may get only 1. No promises. It doesn't matter how the other side spat bits of the tape out; the tape is going through an extremely complex bunch of gears and you just have no idea how many letters are going to be coming by at any given time.
If you care about certain groupings of characters, you have to put things in the stream (on the tape) saying where those units start and/or end. There are many ways of doing this. Telnet itself uses several different ones in different circumstances.
And on the receiving side, you have to look for those markers and put the sequences of characters you want to treat as a unit together yourself.
So, if you want to read a line, you have to read until you get a '\n'. If you try to read 1024 bytes at a time, you have to take into account that the '\n' might end up in the middle of your buffer and so your buffer may contain the line you want and part of the next line. It might even contain several lines. The only promise is that you won't get more characters than you asked for.
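For the line-oriented case specifically, here is a minimal sketch (assuming a blocking socket and plain ASCII input; the function name is made up) of reading until a '\n' while keeping any bytes that belong to the next line:
// Hypothetical sketch: keep receiving into a growing buffer until a full line
// (terminated by '\n') is available, then return that line and keep the
// leftover bytes around for the next call.
#include <string>
#include <winsock2.h>   // on POSIX, recv has the same shape with an int fd
bool read_line(SOCKET sock, std::string& carry, std::string& line)
{
    for (;;) {
        // If a complete line is already buffered, hand it back.
        size_t pos = carry.find('\n');
        if (pos != std::string::npos) {
            line = carry.substr(0, pos + 1);
            carry.erase(0, pos + 1);      // leftover belongs to the next line
            return true;
        }
        char buf[1024];
        int n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0)                       // error or connection closed
            return false;
        carry.append(buf, n);             // accumulate whatever arrived
    }
}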
Force the sending side to send more bytes at a time, for example by relying on Nagle's algorithm; then you will receive them in larger packets.