iOS UDP socket receive delays using multicast addr and port - C++

I'm using a MacBook Pro to send a series of 1024-byte UDP datagrams in my main thread over a socket using a multicast address and port every 12 ms (ugly but illustrative):
for (;;) {
    //-------------- Send ----------------
    try {
        sock.sendTo(filebufPos, readsize, mcAddr, mcPort);
        sendCount++;
        if (sendCount < sends_needed) {
            filebufPos += readsize;
        } else {
            sendCount = 0;        // Reset send counter
            filebufPos = filebuf; // Reset pointer to start of file buffer
        }
    } catch (SocketException &e) {
        cerr << e.what() << endl;
    }
    usleep(12000); //------------ Pause between sends -----------
}
On my iPhone 5, I try to receive the datagrams using a non-blocking 'recvFrom' call on the same multicast address and port within a callback routine that gets called every 1.5 ms, like so:
try {
    nBytesReceived = sock->recvFrom((void *)buf, nBytesCount, mcAddr, mcPort);
} catch (SocketException &e) {
    cerr << e.what() << endl;
}
I measure the time between successful UDP socket recvs on the iPhone client side. Ideally, I should receive a UDP datagram every 8 callbacks (12 ms), and for the most part this is the case. However, sometimes the time between recvs is very short, while at other times it can be as long as 100-150 ms.
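For reference, a minimal standalone sketch of how such inter-arrival gaps can be measured (plain POSIX sockets rather than the wrapper class above; the group address and port are placeholders):

#include <arpa/inet.h>
#include <chrono>
#include <fcntl.h>
#include <iostream>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(12345);            // placeholder port
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, (sockaddr *)&local, sizeof(local));

    ip_mreq mreq{};
    inet_pton(AF_INET, "239.0.0.1", &mreq.imr_multiaddr); // placeholder group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    fcntl(sock, F_SETFL, O_NONBLOCK); // match the non-blocking recv above

    char buf[1024];
    auto last = std::chrono::steady_clock::now();
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n > 0) {
            auto now = std::chrono::steady_clock::now();
            std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(now - last).count()
                      << " ms since previous datagram\n";
            last = now;
        }
        usleep(1500); // stand-in for the 1.5 ms callback cadence
    }
}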
Any ideas why this might be happening?
Thanks!

Related

Periodic latency spikes from UDP socket caused by periodic sendto()/recvfrom() delay, C++ for Linux RT-PREEMPT system

I have set up two Raspberry Pis to use UDP sockets, one as the client and one as the server. The kernel has been patched with RT-PREEMPT (4.9.43-rt30+). The client acts as an echo to the server to allow for the calculation of Round-Trip Latency (RTL). At the moment a send frequency of 10 Hz is being used on the server side with 2 threads: one for sending the messages to the client and one for receiving the messages from the client. The threads are set up with a scheduling priority of 95 using round-robin scheduling.
The server constructs a message containing the time the message was sent and the time past since messages started being sent. This message is sent from the server to the client then immediately returned to the server. Upon receiving the message back from the client the server calculates the Round-Trip Latency and then stores it in a .txt file, to be used for plotting using Python.
The problem is that when analysing the graphs I noticed a periodic spike in the RTL. The top graph of the image (RTL latency and sendto() + recvfrom() times; in the legend I used RTT instead of RTL) shows these spikes, and they correspond directly to the spikes in the server-side sendto() and recvfrom() calls. Any suggestions on how to remove these spikes? My application is very reliant on consistency.
Things I have tried and noticed:
The size of the message being sent has no effect. I have tried larger messages (1024 bytes) and smaller messages (0 bytes) and the periodic delay does not change. This suggests to me that it is not a buffer issue as there is nothing filling up?
The frequency at which the messages are sent does play a big role, if the frequency is doubled then the latency spikes occur twice as often. This then suggests that something is filling up and while it empties the sendto()/recvfrom() functions experience a delay?
Changes to the buffer size with setsockopt() have no effect.
I have tried quite a few other settings (MSG_DONTWAIT, etc) to no avail.
I am by no means an expert in sockets/C++ programming/Linux, so any suggestions will be greatly appreciated as I am out of ideas. Below is the code used to create the socket and start the server threads for sending and receiving the messages, followed by the code for sending the messages from the server. If you need the rest please let me know, but for now my concern is centred on the delay caused by the sendto() function. Thanks.
thread_priority = priority;
recv_buff = recv_buff_len;
std::cout << del << " Second start-up delay..." << std::endl;
sleep(del);
std::cout << "Delay complete..." << std::endl;
// Master socket creation
master = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (master < 0) { // socket() returns -1 on failure (0 is a valid descriptor)
    perror("Could not create the socket: ");
    exit(EXIT_FAILURE);
}
std::cout << "Master Socket Created..." << std::endl;
std::cout << "Adjusting send and receive buffers..." << std::endl;
setBuff();
// Server address and port creation
serv.sin_family = AF_INET;         // Address family
serv.sin_addr.s_addr = INADDR_ANY; // Server IP address; INADDR_ANY works on the server side only
serv.sin_port = htons(portNum);
server_len = sizeof(serv);
// Bind the master socket to the specified address and port
if (bind(master, (struct sockaddr *) &serv, sizeof(serv)) < 0) {
    perror("Could not bind socket...");
    exit(EXIT_FAILURE);
}
// Show which address and port are being used
char IP[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &(serv.sin_addr), IP, INET_ADDRSTRLEN); // INADDR_ANY allows all network interfaces, so this always shows 0.0.0.0
std::cout << "Listening on port: " << ntohs(serv.sin_port)
          << ", and address: " << IP << "..." << std::endl;
// Options specific to the server RPi
if (server) {
    std::cout << "Run Time: " << duration << " seconds." << std::endl;
    client.sin_family = AF_INET; // Address family
    inet_pton(AF_INET, clientIP.c_str(), &(client.sin_addr));
    client.sin_port = htons(portNum);
    client_len = sizeof(client);
    serv_send = std::thread(&SocketServer::serverSend, this);
    serv_send.detach(); // The server send thread just runs continuously
    serv_receive = std::thread(&SocketServer::serverReceive, this);
    serv_receive.join();
} else { // Specific to the client RPi
    SocketServer::clientReceiveSend();
}
And the code for sending the messages:
// Set the priority of this thread
param.sched_priority = thread_priority;
int result = sched_setscheduler(0, SCHED_RR, &param); // pid 0 = the calling thread; getpid() would target the main thread
if (result) {
    perror("The following error occurred while setting serverSend() priority");
}
int sched = sched_getscheduler(0);
printf("serverSend() priority result %i : Scheduler policy id %i \n", result, sched);
std::ofstream Out;
std::ofstream Out1;
Out.open(file_name);
Out << duration << std::endl;
Out << frequency << std::endl;
Out << thread_priority << std::endl;
Out.close();
Out1.open("Server Side Send.txt");
packets_sent = 0;
Tbegin = std::chrono::high_resolution_clock::now();
// Send messages for a specified time period at a specified frequency
while (!stop) {
    // Set up the message to be sent
    Tstart = std::chrono::high_resolution_clock::now();
    TDEL = std::chrono::duration_cast<std::chrono::duration<double>>(Tstart - Tbegin); // Total time passed before sending the message
    memcpy(&message[0], &Tstart, sizeof(Tstart)); // Include the time the message was sent
    memcpy(&message[8], &TDEL, sizeof(TDEL));     // Include the time that has passed since Tbegin
    // Send the message to the client
    T1 = std::chrono::high_resolution_clock::now();
    sendto(master, &message, 16, MSG_DONTWAIT, (struct sockaddr *)&client, client_len);
    T2 = std::chrono::high_resolution_clock::now();
    T3 = std::chrono::duration_cast<std::chrono::duration<double>>(T2 - T1);
    Out1 << T3.count() << std::endl;
    packets_sent++;
    // Busy-wait so that the required message send frequency is met
    while (true) {
        Tend = std::chrono::high_resolution_clock::now();
        Tdel = std::chrono::duration_cast<std::chrono::duration<double>>(Tend - Tstart);
        if (Tdel.count() > 1 / frequency) {
            break;
        }
    }
    TDEL = std::chrono::duration_cast<std::chrono::duration<double>>(Tend - Tbegin);
    // Check whether the program has run as long as required
    if (TDEL.count() > duration) {
        stop = true;
        break;
    }
}
std::cout << "Exiting serverSend() thread..." << std::endl;
// Append extra results to the end of the last file
Out.open(file_name, std::ios_base::app);
Out << packets_sent << "\t\t " << packets_returned << std::endl;
Out.close();
Out1.close();
std::cout << "^C to exit..." << std::endl;
I have sorted out the problem. It was not the ARP tables: even with the ARP functionality disabled there was a periodic spike, though with ARP disabled it showed as a single spike in latency as opposed to a series of latency spikes.
It turned out to be a problem with the threads I was using: there were two threads on a CPU capable of handling only one thread at a time, so the thread sending the information was being affected by the thread receiving information. I changed the thread priorities around a lot (send higher than receive, receive higher than send, send equal to receive) to no avail. I have now bought a Raspberry Pi that has 4 cores, and I have set the send thread to run on core 2 while the receive thread runs on core 3, preventing the threads from interfering with each other. This has not only removed the latency spikes but also reduced the mean latency of my setup.
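For anyone wanting to do the same, a minimal sketch of pinning each thread to its own core with pthread_setaffinity_np (Linux-specific; the helper name and core numbers are illustrative):

// Pin a std::thread's underlying pthread to a single core (Linux).
// g++ on Linux defines _GNU_SOURCE by default, which pthread_setaffinity_np requires.
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <cstdio>

static void pinToCore(std::thread &t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    int rc = pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
    if (rc != 0)
        std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
}

// Usage (mirroring the server's two threads):
//   serv_send    = std::thread(&SocketServer::serverSend, this);
//   pinToCore(serv_send, 2);    // send thread on core 2
//   serv_receive = std::thread(&SocketServer::serverReceive, this);
//   pinToCore(serv_receive, 3); // receive thread on core 3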

How to interrupt accept() in a TCP/IP server?

I'm working on a vision application which has two modes:
1) parameter setting
2) automatic
The problem is in 2), when my app waits for a signal via TCP/IP. The program freezes while accept() is called. I want to provide the possibility to change the mode from a GUI; the mode change is signalled by another mechanism (a message queue). So I want to interrupt the accept state.
Is there a simple way to interrupt the accept()?
std::cout << "TCPIP " << std::endl;
client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
if (client != SOCKET_ERROR)
cout << "client accepted: " << inet_ntoa(clientinfo.sin_addr) << ":"
<< ntohs(clientinfo.sin_port) << endl;
//receive the message from client
//recv returns the number of bytes received!!
//buf contains the data received
int rec = recv(client, buf, sizeof(buf), 0);
cout << "Message: " << rec << " bytes and the message " << buf << endl;
I read about select() but I have no clue how to use it. Could anybody give me a hint how to implement for example select() in my code?
Thanks.
Best regards,
T
The solution is to call accept() only when there is an incoming connection request. You do that by polling on the listen socket, where you can also add other file descriptors, use a timeout, etc.
You did not mention your platform. On Linux, see epoll(); on UNIX, see poll()/select(); on Windows I don't know.
A general way would be to use a local TCP connection by which the UI thread could interrupt the select call. The general architecture would use:
a dedicated thread waiting with select on both slisten and the local TCP connection
a TCP connection (a Unix domain socket on a Unix or Unix-like system, or 127.0.0.1 on Windows) between the UI thread and the waiting one
various synchronizations/messages between both threads as required
Just have select watch both slisten and the local socket. It will return as soon as one is ready, and you will be able to tell which one is ready. A sketch of this wake-up pattern follows.
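A minimal POSIX sketch of that idea, assuming a socketpair() as the local connection (on Windows a loopback TCP pair would play the same role; error handling trimmed):

// Wake a select() that is blocked on the listen socket by writing one byte
// to a local socket pair.
#include <sys/socket.h>
#include <sys/select.h>
#include <unistd.h>

int wakefds[2]; // wakefds[0]: watched by select; wakefds[1]: written by the UI thread

// The UI thread calls this to interrupt the accept loop:
void requestModeChange() {
    char b = 1;
    write(wakefds[1], &b, 1);
}

// The accept loop:
void acceptLoop(int slisten) {
    socketpair(AF_UNIX, SOCK_STREAM, 0, wakefds);
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(slisten, &rfds);
        FD_SET(wakefds[0], &rfds);
        int maxfd = (slisten > wakefds[0] ? slisten : wakefds[0]);
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;
        if (FD_ISSET(wakefds[0], &rfds)) {
            char b;
            read(wakefds[0], &b, 1); // drain the wake-up byte
            break;                   // handle the mode change
        }
        if (FD_ISSET(slisten, &rfds)) {
            int client = accept(slisten, NULL, NULL); // will not block: a connection is pending
            // ... handle the client ...
            close(client);
        }
    }
}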
As you haven't specified your platform, and networking, especially async, is platform-specific, I suppose you need a cross-platform solution. Boost.Asio fits perfectly here: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
Example from the link:
void accept_handler(const boost::system::error_code& error)
{
    if (!error)
    {
        // Accept succeeded.
    }
}
...
boost::asio::ip::tcp::acceptor acceptor(io_service);
...
boost::asio::ip::tcp::socket socket(io_service);
acceptor.async_accept(socket, accept_handler);
If Boost is a problem, Asio can be a header-only lib and used w/o Boost: http://think-async.com/Asio/AsioAndBoostAsio.
One way would be to run select in a loop with a timeout.
Put slisten into non-blocking mode (this isn't strictly necessary, but sometimes accept() blocks even when select() says otherwise) and then:
fd_set read_fds;
struct timeval timeout;
int select_status;
while (true) {
    // select() modifies both the fd_set and (on Linux) the timeout,
    // so they must be re-initialised on every iteration
    FD_ZERO(&read_fds);
    FD_SET(slisten, &read_fds);
    timeout.tv_sec = 1; // 1 s timeout
    timeout.tv_usec = 0;
    select_status = select(slisten + 1, &read_fds, NULL, NULL, &timeout);
    if (select_status == -1) {
        // ERROR: do something
    } else if (select_status > 0) {
        break; // we have a pending connection, we can accept now
    }
    // otherwise (select_status == 0): timeout, continue
}
client = accept(slisten, ...);
This will allow you to catch signals once per second. More info here:
http://man7.org/linux/man-pages/man2/select.2.html
and Windows version (pretty much the same):
https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms740141(v=vs.85).aspx

Winsock & IOCP: UDP Server. Async client management

I have a Winsock server, accepting packets from a local IP, which currently works without using IOCP. I want it to be non-blocking though, working through IOCP. Yes, I know about the alternatives (select, WSAAsyncSelect, etc.), but these won't do for developing an MMO server.
So here's the question - how do I do this using std::thread and IOCP?
I already know that GetQueuedCompletionStatus() dequeues packets, while PostQueuedCompletionStatus() queues those to the IOCP.
Is this the proper way to do it async though?
How can I treat all clients equally on about 10 threads? I thought about receiving UDP packets and processing them while IOCP has something in its queue, but packets will be processed at most 10 at a time, and I also have an infinite loop in each thread.
The target is creating a game server, capable of holding thousands of clients at the same time.
About the code: netListener() is a class, holding packets received from the listening network interface in a vector. All it does in Receive() is:
WSARecvFrom(sockfd, &buffer, 1, &bytesRecv, &flags, (SOCKADDR*)&senderAddr, &size, &overl, 0);
std::cout << "\n\nReceived " << bytesRecv << " bytes.\n" << "Packet [" << std::string(buffer.buf, bytesRecv) << "]\n";
The code works, buffer shows what I've sent to myself, but I'm not sure whether having only ONE receive() will suffice.
About blocking - yes, I realized that putting listener.Receive() into a separate thread doesn't block the main thread. But imagine this - lots of clients try to send packets; can one receive call process them all? Not to mention I was planning to queue an IOCP packet on each receive, but I'm still not sure how to do this properly.
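For what it's worth, the usual IOCP answer to "is one receive enough?" is to keep several overlapped receives pending at once, each with its own buffer and OVERLAPPED block. A rough sketch (not the netListener code; the structure, names, and counts are illustrative):

// Keep N overlapped WSARecvFrom operations pending on one UDP socket.
// Each completion is dequeued by a worker via GetQueuedCompletionStatus,
// after which the same context is reposted. Error handling trimmed.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstring>

struct RecvContext {
    WSAOVERLAPPED overl; // first member, so an OVERLAPPED* can be cast back to RecvContext*
    WSABUF wsabuf;
    char buffer[1024];
    sockaddr_in from;
    int fromLen;
};

void postReceive(SOCKET sock, RecvContext *ctx) {
    std::memset(&ctx->overl, 0, sizeof(ctx->overl));
    ctx->wsabuf.buf = ctx->buffer;
    ctx->wsabuf.len = sizeof(ctx->buffer);
    ctx->fromLen = sizeof(ctx->from);
    DWORD flags = 0;
    // The completion is delivered to the IOCP the socket is associated with
    WSARecvFrom(sock, &ctx->wsabuf, 1, NULL, &flags,
                (SOCKADDR *)&ctx->from, &ctx->fromLen, &ctx->overl, NULL);
}

void postInitialReceives(SOCKET sock) {
    static RecvContext contexts[16]; // 16 receives pending at all times
    for (RecvContext &c : contexts)
        postReceive(sock, &c);
}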
And another question - is it possible to establish a direct connection between a client and another client? If you host a server on a local machine behind NAT and you want it to be accessible from the internet, for example.
Threads:
void Host::threadFunc(int i) {
    // NB: holding this mutex for the lifetime of the loop means only one
    // worker thread can ever run; shown here as in the original
    threadMutex.lock();
    for (;;) {
        if (m_Init) {
            if (GetQueuedCompletionStatus(iocp, &bytesReceived, &completionKey,
                                          (LPOVERLAPPED*)&overl, INFINITE)) {
                std::cout << "1 completion packet dequeued, bytes: " << bytesReceived << std::endl;
            }
        }
    }
    threadMutex.unlock();
}

void Host::createThreads() {
    // Create one thread per processor
    for (unsigned int i = 0; i < SystemInfo.dwNumberOfProcessors; ++i) {
        threads.push_back(std::thread(&Host::threadFunc, this, i));
        if (threads[i].joinable()) threads[i].detach();
    }
    std::cout << "Threads created: " << threads.size() << std::endl;
}
Host
Host::Host() {
    using namespace std;
    InitWSA();
    createThreads();
    m_Init = true;
    SecureZeroMemory((PVOID)&overl, sizeof(WSAOVERLAPPED));
    overl.hEvent = WSACreateEvent();
    // NB: 'iocp' is passed as the ExistingCompletionPort argument while still
    // uninitialised; to create a new port it should be NULL on the first call
    iocp = CreateIoCompletionPort((HANDLE)sockfd, iocp, 0, threads.size());
    listener = netListener(sockfd, overl, 12); // 12-byte buffer size
    for (int i = 0; i < 4; ++i) { // IOCP queue test
        if (PostQueuedCompletionStatus(iocp, 150, completionKey, &overl)) {
            std::cout << "1 completion packet queued\n";
        }
    }
    std::cin.get();
    listener.Receive(); // Packet receive test - adds a completion packet n bytes long if the client sent one
    std::cin.get();
}

send() not reporting ENOTCONN when client has closed the connection (AS400)

This is on an AS400 (IBM i, iSeries, et al).
I have a small Java program that I use to send test files to a server written in C++, which also runs on the IBM i. In my Java program I set my timeout for a response to be 5 seconds, say. In the server I randomly sleep for 0 to 10 seconds. When the Java program times out, it throws java.net.SocketTimeoutException, closes the socket with .close(), and exits. The server program just goes ahead after its sleep and calls send(). Thing is, send() does not fail with -1 and give ENOTCONN. Why? Also, inet_ntop() on the socket gives me the remote IP and port that connected to the server, as though the socket were still connected. Scratching my head.
EDIT: After disappointment with poll(), I found that select() will report an error via FD_ISSET() when the error set is populated. In my case, select() returns 3, indicating that 3 conditions (read, write and error) are set for my one socket. You can't find out from select() itself what the error is, or at least I don't know yet how to find out.
fd_set read_set, write_set, error_set;
FD_ZERO(&read_set);
FD_ZERO(&write_set);
FD_ZERO(&error_set);
FD_SET(sock_fd, &read_set);
FD_SET(sock_fd, &write_set);
FD_SET(sock_fd, &error_set);
struct timeval timeout;
timeout.tv_sec = 10; // reset this on every new iteration.
timeout.tv_usec = 0;
int rc = select(sock_fd + 1, &read_set, &write_set, &error_set, &timeout);
CERR << "select() returned " << rc << endl;
if (rc >= 0) {
    if (FD_ISSET(sock_fd, &read_set)) {
        CERR << "ready to read" << endl;
    }
    if (FD_ISSET(sock_fd, &write_set)) {
        CERR << "ready to write" << endl;
    }
    if (FD_ISSET(sock_fd, &error_set)) {
        CERR << "has an error" << endl;
        // NB: select() does not set errno to the socket's pending error,
        // so this prints whatever errno already happened to hold
        CERR << "errno=" << errno << ", " << strerror(errno) << endl;
    }
}
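To actually retrieve the pending error once select() flags the descriptor, the usual approach is getsockopt() with SO_ERROR (a sketch; note that reading it also clears the error):

// Fetch and clear the pending socket error reported by select()'s error set.
#include <sys/socket.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>

int pendingSocketError(int sock_fd) {
    int err = 0;
    socklen_t len = sizeof(err);
    if (getsockopt(sock_fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
        return errno; // getsockopt itself failed
    return err;       // 0 means no pending error
}

// Usage inside the FD_ISSET(sock_fd, &error_set) branch:
//   int err = pendingSocketError(sock_fd);
//   fprintf(stderr, "socket error=%d, %s\n", err, strerror(err));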
From man send:
ENOTCONN
The socket is not connected, and no target has been given.
In other words, your expectations are incorrect. ENOTCONN is for the case where you haven't connected the socket; it doesn't have anything to do with the peer disconnecting. That case will eventually cause ECONNRESET, but not on the first such send, because of TCP buffering.
Working as designed.
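A quick way to see that behaviour (a sketch for a POSIX-ish system; exact errno values and timing can differ, IBM i included): send twice after the peer has closed. The first send() is typically accepted into the buffer, and a later one fails once the RST has come back.

// After the peer closes, the first send() usually succeeds (the data is
// buffered and the RST arrives afterwards); a subsequent send() then fails,
// typically with ECONNRESET or EPIPE rather than ENOTCONN. Illustrative only;
// on Linux you may want MSG_NOSIGNAL to avoid SIGPIPE on the second send.
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

void probeConnection(int client_fd) {
    const char probe[] = "x";
    if (send(client_fd, probe, sizeof(probe), 0) < 0)
        perror("first send");  // rarely fails right after the peer closes
    sleep(1);                  // give the RST time to arrive
    if (send(client_fd, probe, sizeof(probe), 0) < 0)
        perror("second send"); // now the failure shows up
}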

Long delays in sending UDP packets

I have an application that receives, processes, and transmits UDP packets.
Everything works fine if the port numbers for reception and transmission are different.
If the port numbers are the same and the IP addresses are different, it usually works fine EXCEPT when the IP addresses are on the same subnet as the machine running the application. In that last case the send_to function requires several seconds to complete, instead of a few milliseconds as usual.
Rx Port   Tx IP              Tx Port   Result
5001      Same subnet        5002      OK     Delay ~ 0.001 secs
5001      Different subnet   5001      OK     Delay ~ 0.001 secs
5001      Same subnet        5001      Fails  Delay > 2 secs
Here is a short program that demonstrates the problem.
#include <ctime>
#include <iostream>
#include <string>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <windows.h> // QueryPerformanceCounter / QueryPerformanceFrequency
using boost::asio::ip::udp;
using std::cout;
using std::endl;
int test(const std::string& output_IP)
{
    try
    {
        unsigned short prev_seq_no; // (unused in this cut-down example)
        boost::asio::io_service io_service;
        // Build the input socket.
        /* This is connected to a UDP client that is running continuously,
           sending messages that include an incrementing sequence number. */
        const int input_port = 5001;
        udp::socket input_socket(io_service, udp::endpoint(udp::v4(), input_port));
        // Build the output socket.
        const std::string output_Port = "5001";
        udp::resolver resolver(io_service);
        udp::resolver::query query(udp::v4(), output_IP, output_Port);
        udp::endpoint output_endpoint = *resolver.resolve(query);
        udp::socket output_socket(io_service);
        output_socket.open(udp::v4());
        // Double the output buffer size.
        boost::asio::socket_base::send_buffer_size option(8192 * 2);
        output_socket.set_option(option);
        cout << "TX to " << output_endpoint.address() << ":" << output_endpoint.port() << endl;
        int count = 0;
        for (;;)
        {
            // Receive a packet.
            unsigned short recv_buf[20000];
            udp::endpoint remote_endpoint;
            boost::system::error_code error;
            int bytes_received = input_socket.receive_from(boost::asio::buffer(recv_buf, 20000),
                                                           remote_endpoint, 0, error);
            if (error && error != boost::asio::error::message_size)
                throw boost::system::system_error(error);
            // Start the timer.
            __int64 TimeStart;
            QueryPerformanceCounter((LARGE_INTEGER *)&TimeStart);
            // Send onwards.
            boost::system::error_code ignored_error;
            output_socket.send_to(boost::asio::buffer(recv_buf, bytes_received),
                                  output_endpoint, 0, ignored_error);
            // Stop the timer and display the tx time.
            __int64 TimeEnd;
            QueryPerformanceCounter((LARGE_INTEGER *)&TimeEnd);
            __int64 f;
            QueryPerformanceFrequency((LARGE_INTEGER *)&f);
            cout << "Send time secs " << (double)(TimeEnd - TimeStart) / (double)f << endl;
            // Stop after a few loops.
            if (count++ > 10)
                break;
        }
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0; // the function is declared int, so return a value
}
int main()
{
    test("193.168.1.200");
    test("192.168.1.200");
    return 0;
}
The output from this program, when running on a machine with address 192.168.1.101:
TX to 193.168.1.200:5001
Send time secs 0.0232749
Send time secs 0.00541566
Send time secs 0.00924535
Send time secs 0.00449014
Send time secs 0.00616714
Send time secs 0.0199299
Send time secs 0.00746081
Send time secs 0.000157972
Send time secs 0.000246775
Send time secs 0.00775578
Send time secs 0.00477618
Send time secs 0.0187321
TX to 192.168.1.200:5001
Send time secs 1.39485
Send time secs 3.00026
Send time secs 3.00104
Send time secs 0.00025927
Send time secs 3.00163
Send time secs 2.99895
Send time secs 6.64908e-005
Send time secs 2.99864
Send time secs 2.98798
Send time secs 3.00001
Send time secs 3.00124
Send time secs 9.86207e-005
Why is this happening? Is there any way I can reduce the delay?
Notes:
Built using Code::Blocks, running under various flavours of Windows
Packets are 10000 bytes long
The problem goes away if I connect the computer running the application to a second network, for example a WWLAN (cellular network "rocket stick")
As far as I can tell, this is the situation we have (diagrams omitted):
This works (different ports, same LAN).
This also works (same ports, different LANs).
This does NOT work (same ports, same LAN).
This seems to work (same ports, same LAN, dual-homed Module2 host).
Given this is being observed on Windows for large datagrams with a destination address of a non-existent peer within the same subnet as the sender, the problem is likely the result of send() blocking while waiting for an Address Resolution Protocol (ARP) response so that the layer-2 Ethernet frame can be populated:
When sending data, the layer-2 Ethernet frame will be populated with the media access control (MAC) address of the next hop in the route. If the sender does not know the MAC address for the next hop, it broadcasts an ARP request and caches responses. Using the sender's subnet mask and the destination address, the sender can determine whether the next hop is on the same subnet as the sender or whether the data must route through the default gateway. Based on the results in the question, when sending large datagrams:
datagrams destined to a different subnet have no delay because the default gateway's MAC address is within the sender's ARP cache
datagrams destined to a non-existent peer on the sender's subnet incur a delay waiting for ARP resolution
The socket's send buffer size (SO_SNDBUF) is being set to 16384 bytes, but the size of the datagrams being sent is 10000 bytes. The behavior of send() when the buffer is saturated is unspecified, but some systems will observe send() blocking. In this case, saturation would occur fairly quickly if any datagrams incur a delay, such as by waiting for an ARP response.
// Datagrams being sent are 10000 bytes, but the socket buffer is 16384.
boost::asio::socket_base::send_buffer_size option(8192 * 2);
output_socket.set_option(option);
Consider letting the kernel manage the socket buffer size or increasing it based on your expected throughput.
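For example, a sketch of raising the buffer to cover several 10000-byte datagrams, matching the question's Boost.Asio code (the 64 KB figure is an arbitrary illustration):

// Give the UDP socket room for several in-flight 10000-byte datagrams.
boost::asio::socket_base::send_buffer_size bigger(65536);
output_socket.set_option(bigger);

// Read the value back, since the OS may round or clamp it.
boost::asio::socket_base::send_buffer_size actual;
output_socket.get_option(actual);
std::cout << "SO_SNDBUF is now " << actual.value() << " bytes" << std::endl;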
When sending a datagram with a size that exceeds the Windows registry FastSendDatagramThreshold parameter, the send() call can block until the datagram has been sent. For more details, see the Microsoft TCP/IP Implementation Details:
Datagrams smaller than the value of this parameter go through the fast I/O path or are buffered on send. Larger ones are held until the datagram is actually sent. The default value was found by testing to be the best overall value for performance. Fast I/O means copying data and bypassing the I/O subsystem, instead of mapping memory and going through the I/O subsystem. This is advantageous for small amounts of data. Changing this value is not generally recommended.
If one is observing delays for each send() to an existing peer on the sender's subnet, then profile and analyze the network:
Use iperf to measure the network potential throughput
Use wireshark to get a deeper view into what is occurring on a given node. Look for ARP request and responses.
From the sender's machine, ping the peer and then check the ARP cache. Verify that there is a cache entry for the peer and that it is correct.
Try a different port and/or TCP. This can help identify whether network policies are throttling or shaping traffic for a particular port or protocol.
Also note that sending datagrams below the FastSendDatagramThreshold value in quick succession while waiting for ARP to resolve may cause datagrams to be discarded:
ARP queues only one outbound IP datagram for a specified destination address while that IP address is being resolved to a media access control address. If a User Datagram Protocol (UDP)-based application sends multiple IP datagrams to a single destination address without any pauses between them, some of the datagrams may be dropped if there is no ARP cache entry already present. An application can compensate for this by calling the iphlpapi.dll routine SendArp() to establish an ARP cache entry, before sending the stream of packets.
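A sketch of that mitigation using SendARP from iphlpapi (link against iphlpapi.lib and ws2_32.lib; the helper name and destination address are illustrative):

// Prime the ARP cache for a peer before streaming datagrams at it,
// so the first sends are not queued or dropped during ARP resolution.
#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>

bool primeArpCache(const char *peerIp) {
    IPAddr dest = inet_addr(peerIp);
    ULONG mac[2];               // buffer for the 6-byte MAC address
    ULONG macLen = sizeof(mac);
    DWORD rc = SendARP(dest, 0 /* choose the source address automatically */, mac, &macLen);
    if (rc != NO_ERROR) {
        std::printf("SendARP failed: %lu\n", rc);
        return false;
    }
    return true; // the entry is now in the ARP cache
}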
Alright, I put together some code (below). It is clear that send takes less than one millisecond most of the time. This suggests the problem is with Boost.
#include <iostream>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdexcept>
#include <poll.h>
#include <string>
#include <memory.h>
#include <chrono>
#include <stdio.h>
void test(const std::string& remote, const std::string& hello_string, bool first)
{
    try
    {
        const unsigned short input_port = htons(5001);
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock == -1) {
            perror("Socket creation error: ");
            throw std::runtime_error("Could not create socket!");
        }
        sockaddr_in local_addr;
        memset(&local_addr, 0, sizeof(local_addr)); // zero the struct and set the family before use
        local_addr.sin_family = AF_INET;
        local_addr.sin_port = input_port;
        local_addr.sin_addr.s_addr = INADDR_ANY;
        if (bind(sock, (const sockaddr*)&local_addr, sizeof(local_addr))) {
            perror("Error: ");
            throw std::runtime_error("Can't bind to port!");
        }
        sockaddr_in remote_addr;
        memset(&remote_addr, 0, sizeof(remote_addr));
        remote_addr.sin_family = AF_INET;
        remote_addr.sin_port = input_port;
        if (!inet_aton(remote.c_str(), &remote_addr.sin_addr))
            throw std::runtime_error("Can't parse remote IP address!");
        std::cout << "TX to " << remote << "\n";
        unsigned char recv_buf[40000];
        if (first) {
            std::cout << "First launched, waiting for hello.\n";
            int bytes = recv(sock, &recv_buf, sizeof(recv_buf), 0);
            std::cout << "Seen hello from my friend here: " << recv_buf << ".\n";
        }
        int count = 0;
        for (;;)
        {
            std::chrono::high_resolution_clock::time_point start = std::chrono::high_resolution_clock::now();
            if (sendto(sock, hello_string.c_str(), hello_string.size() + 1, 0,
                       (const sockaddr*)&remote_addr, sizeof(remote_addr)) != (ssize_t)(hello_string.size() + 1)) {
                perror("Sendto error: ");
                throw std::runtime_error("Error sending data");
            }
            std::chrono::high_resolution_clock::time_point end = std::chrono::high_resolution_clock::now();
            std::cout << "Send time nanosecs " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count() << "\n";
            int bytes = recv(sock, &recv_buf, sizeof(recv_buf), 0);
            std::cout << "Seen hello from my friend here: " << recv_buf << ".\n";
            // stop after a few loops
            if (count++ > 10)
                break;
        }
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
}
int main(int argc, char* argv[])
{
    test(argv[1], argv[2], *argv[3] == 'f');
    return 0;
}
As expected, there is no delay. Here is the output from one of the pair (I run the code in pairs on two machines on the same network):
./socktest x.x.x.x 'ThingTwo' f
TX to x.x.x.x
First launched, waiting for hello.
Seen hello from my friend here: ThingOne.
Send time nanosecs 17726
Seen hello from my friend here: ThingOne.
Send time nanosecs 6479
Seen hello from my friend here: ThingOne.
Send time nanosecs 6362
Seen hello from my friend here: ThingOne.
Send time nanosecs 6048
Seen hello from my friend here: ThingOne.
Send time nanosecs 6246
Seen hello from my friend here: ThingOne.
Send time nanosecs 5691
Seen hello from my friend here: ThingOne.
Send time nanosecs 5665
Seen hello from my friend here: ThingOne.
Send time nanosecs 5930
Seen hello from my friend here: ThingOne.
Send time nanosecs 6082
Seen hello from my friend here: ThingOne.
Send time nanosecs 5493
Seen hello from my friend here: ThingOne.
Send time nanosecs 5893
Seen hello from my friend here: ThingOne.
Send time nanosecs 5597
It is good practice to segregate Tx and Rx ports. I derive my own socket class from CAsyncSocket, as it has a message pump that posts a system message when data is received on your socket and invokes the OnReceive function (either yours, if you override the underlying virtual function, or the default if you don't).
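A minimal sketch of that pattern (MFC's CAsyncSocket; the class name and buffer size are illustrative):

// Derive from MFC's CAsyncSocket and override OnReceive(), which the
// framework calls via the message pump whenever data is ready to read.
#include <afxsock.h>

class CRxSocket : public CAsyncSocket {
public:
    virtual void OnReceive(int nErrorCode) {
        if (nErrorCode == 0) {
            char buf[1024];
            int n = Receive(buf, sizeof(buf)); // non-blocking read of the pending data
            if (n > 0) {
                // ... hand the n received bytes to the application ...
            }
        }
        CAsyncSocket::OnReceive(nErrorCode); // fall through to the default handling
    }
};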