I have an application that receives, processes, and transmits UDP packets.
Everything works fine if the port numbers for reception and transmission are different.
If the port numbers are the same and the IP addresses are different, it usually works fine EXCEPT when the destination IP address is on the same subnet as the machine running the application. In that case the send_to function takes several seconds to complete, instead of the usual few milliseconds.
Rx Port   Tx IP              Tx Port   Result
5001      Same subnet        5002      OK     Delay ~ 0.001 secs
5001      Different subnet   5001      OK     Delay ~ 0.001 secs
5001      Same subnet        5001      Fails  Delay > 2 secs
Here is a short program that demonstrates the problem.
#include <ctime>
#include <iostream>
#include <string>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <windows.h> // QueryPerformanceCounter / QueryPerformanceFrequency
using boost::asio::ip::udp;
using std::cout;
using std::endl;
void test( const std::string& output_IP)
{
try
{
unsigned short prev_seq_no;
boost::asio::io_service io_service;
// build the input socket
/* This is connected to a UDP client that is running continuously
sending messages that include an incrementing sequence number
*/
const int input_port = 5001;
udp::socket input_socket(io_service, udp::endpoint(udp::v4(), input_port ));
// build the output socket
const std::string output_Port = "5001";
udp::resolver resolver(io_service);
udp::resolver::query query(udp::v4(), output_IP, output_Port );
udp::endpoint output_endpoint = *resolver.resolve(query);
udp::socket output_socket( io_service );
output_socket.open(udp::v4());
// double output buffer size
boost::asio::socket_base::send_buffer_size option( 8192 * 2 );
output_socket.set_option(option);
cout << "TX to " << output_endpoint.address() << ":" << output_endpoint.port() << endl;
int count = 0;
for (;;)
{
// receive packet
unsigned short recv_buf[ 20000 ];
udp::endpoint remote_endpoint;
boost::system::error_code error;
int bytes_received = input_socket.receive_from(boost::asio::buffer(recv_buf,20000),
remote_endpoint, 0, error);
if (error && error != boost::asio::error::message_size)
throw boost::system::system_error(error);
// start timer
__int64 TimeStart;
QueryPerformanceCounter( (LARGE_INTEGER *)&TimeStart );
// send onwards
boost::system::error_code ignored_error;
output_socket.send_to(
boost::asio::buffer(recv_buf,bytes_received),
output_endpoint, 0, ignored_error);
// stop time and display tx time
__int64 TimeEnd;
QueryPerformanceCounter( (LARGE_INTEGER *)&TimeEnd );
__int64 f;
QueryPerformanceFrequency( (LARGE_INTEGER *)&f );
cout << "Send time secs " << (double) ( TimeEnd - TimeStart ) / (double) f << endl;
// stop after loops
if( count++ > 10 )
break;
}
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
int main( )
{
test( "193.168.1.200" );
test( "192.168.1.200" );
return 0;
}
The output from this program, when run on a machine with address 192.168.1.101:
TX to 193.168.1.200:5001
Send time secs 0.0232749
Send time secs 0.00541566
Send time secs 0.00924535
Send time secs 0.00449014
Send time secs 0.00616714
Send time secs 0.0199299
Send time secs 0.00746081
Send time secs 0.000157972
Send time secs 0.000246775
Send time secs 0.00775578
Send time secs 0.00477618
Send time secs 0.0187321
TX to 192.168.1.200:5001
Send time secs 1.39485
Send time secs 3.00026
Send time secs 3.00104
Send time secs 0.00025927
Send time secs 3.00163
Send time secs 2.99895
Send time secs 6.64908e-005
Send time secs 2.99864
Send time secs 2.98798
Send time secs 3.00001
Send time secs 3.00124
Send time secs 9.86207e-005
Why is this happening? Is there any way I can reduce the delay?
Notes:
Built using code::blocks, running under various flavours of Windows
Packets are 10000 bytes long
The problem goes away if I connect the computer running the application to a second network, for example a WWAN ( cellular network "rocket stick" )
As far as I can tell, this is the situation we have:
This works ( different ports, same LAN )
This also works ( same ports, different LANs )
This does NOT work ( same ports, same LAN )
This seems to work ( same ports, same LAN, dual-homed Module2 host )
Given this is being observed on Windows for large datagrams with a destination address of a non-existent peer within the same subnet as the sender, the problem is likely the result of send() blocking while waiting for an Address Resolution Protocol (ARP) response so that the layer-2 Ethernet frame can be populated:
When sending data, the layer-2 Ethernet frame will be populated with the media access control (MAC) address of the next hop in the route. If the sender does not know the MAC address for the next hop, it will broadcast an ARP request and cache responses. Using the sender's subnet mask and the destination address, the sender can determine whether the next hop is on the same subnet as the sender or whether the data must route through the default gateway. Based on the results in the question, when sending large datagrams:
datagrams destined to a different subnet have no delay because the default gateway's MAC Address is within the sender's ARP cache
datagrams destined to a non-existent peer on the sender's subnet incur a delay waiting for ARP resolution
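As a rough sketch of that next-hop decision (the function name and the host-order uint32_t representation are illustrative only, not taken from the question's code):
#include <cstdint>
// Compare the masked source and destination addresses. A match means the
// sender ARPs for the destination itself; otherwise it ARPs for the default
// gateway, whose entry is usually already cached.
bool same_subnet(uint32_t src, uint32_t dst, uint32_t mask)
{
    return (src & mask) == (dst & mask);
}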
The socket's send buffer size (SO_SNDBUF) is being set to 16384 bytes, but the datagrams being sent are 10000 bytes. The behavior of send() when the buffer is saturated is unspecified, but some systems will observe send() blocking. In this case, saturation would occur fairly quickly if any datagrams incur a delay, such as by waiting for an ARP response.
// Datagrams being sent are 10000 bytes, but the socket buffer is 16384.
boost::asio::socket_base::send_buffer_size option(8192 * 2);
output_socket.set_option(option);
Consider letting the kernel manage the socket buffer size or increasing it based on your expected throughput.
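For example, a hedged sketch against the question's output_socket, requesting a larger buffer and reading back what the OS actually granted (the 1 MB figure is arbitrary, chosen purely for illustration):
// Ask for a bigger SO_SNDBUF and confirm the value the OS actually applied.
boost::asio::socket_base::send_buffer_size request(1024 * 1024);
output_socket.set_option(request);
boost::asio::socket_base::send_buffer_size granted;
output_socket.get_option(granted);
std::cout << "SO_SNDBUF is now " << granted.value() << " bytes\n";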
When sending a datagram whose size exceeds the Windows registry FastSendDatagramThreshold parameter, the send() call can block until the datagram has actually been sent. For more details, see the Microsoft TCP/IP Implementation Details:
Datagrams smaller than the value of this parameter go through the fast I/O path or are buffered on send. Larger ones are held until the datagram is actually sent. The default value was found by testing to be the best overall value for performance. Fast I/O means copying data and bypassing the I/O subsystem, instead of mapping memory and going through the I/O subsystem. This is advantageous for small amounts of data. Changing this value is not generally recommended.
If one is observing delays for each send() to an existing peer on the sender's subnet, then profile and analyze the network:
Use iperf to measure the network potential throughput
Use wireshark to get a deeper view into what is occurring on a given node. Look for ARP request and responses.
From the sender's machine, ping the peer and then check the ARP cache. Verify that there is a cache entry for the peer and that it is correct.
Try a different port and/or TCP. This can help identify whether a network's policies are throttling or shaping traffic for a particular port or protocol.
Also note that sending datagrams below the FastSendDatagramThreshold value in quick succession while waiting for ARP to resolve may cause datagrams to be discarded:
ARP queues only one outbound IP datagram for a specified destination address while that IP address is being resolved to a media access control address. If a User Datagram Protocol (UDP)-based application sends multiple IP datagrams to a single destination address without any pauses between them, some of the datagrams may be dropped if there is no ARP cache entry already present. An application can compensate for this by calling the iphlpapi.dll routine SendArp() to establish an ARP cache entry, before sending the stream of packets.
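A minimal sketch of priming the ARP cache this way before streaming datagrams (error handling trimmed; the helper name and peer address are illustrative):
#include <winsock2.h>
#include <iphlpapi.h>   // SendARP; link with iphlpapi.lib and ws2_32.lib

// Resolve the peer's MAC address once so later sends do not stall on ARP.
// Returns true if an ARP cache entry could be established.
bool prime_arp_cache(const char* peer /* e.g. "192.168.1.200" */)
{
    ULONG mac[2];                 // receives the 6-byte MAC address
    ULONG macLen = sizeof(mac);
    IPAddr dest = inet_addr(peer);
    return SendARP(dest, 0 /* let the stack choose the source */, mac, &macLen) == NO_ERROR;
}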
Alright, I put together some code (below). It is clear that send takes less than one millisecond most of the time. This proves the problem is with Boost.
#include <iostream>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdexcept>
#include <poll.h>
#include <string>
#include <memory.h>
#include <chrono>
#include <stdio.h>
void test( const std::string& remote, const std::string& hello_string, bool first)
{
try
{
const short unsigned input_port = htons(5001);
int sock = socket(AF_INET, SOCK_DGRAM, 0);
if (sock == -1) {
perror("Socket creation error: ");
throw std::runtime_error("Could not create socket!");
}
sockaddr_in local_addr;
memset(&local_addr, 0, sizeof(local_addr)); // zero the structure before use
local_addr.sin_family = AF_INET;
local_addr.sin_port = input_port;
local_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(sock, (const sockaddr*)&local_addr, sizeof(local_addr))) {
perror("Error: ");
throw std::runtime_error("Can't bind to port!");
}
sockaddr_in remote_addr;
memset(&remote_addr, 0, sizeof(remote_addr));
remote_addr.sin_family = AF_INET;
remote_addr.sin_port = input_port;
if (!inet_aton(remote.c_str(), &remote_addr.sin_addr))
throw std::runtime_error("Can't parse remote IP address!");
std::cout << "TX to " << remote << "\n";
unsigned char recv_buf[40000];
if (first) {
std::cout << "First launched, waiting for hello.\n";
int bytes = recv(sock, &recv_buf, sizeof(recv_buf), 0);
std::cout << "Seen hello from my friend here: " << recv_buf << ".\n";
}
int count = 0;
for (;;)
{
std::chrono::high_resolution_clock::time_point start = std::chrono::high_resolution_clock::now();
if (sendto(sock, hello_string.c_str(), hello_string.size() + 1, 0, (const sockaddr*)&remote_addr, sizeof(remote_addr)) != (ssize_t)(hello_string.size() + 1)) {
perror("Sendto error: ");
throw std::runtime_error("Error sending data");
}
std::chrono::high_resolution_clock::time_point end = std::chrono::high_resolution_clock::now();
std::cout << "Send time nanosecs " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count() << "\n";
int bytes = recv(sock, &recv_buf, sizeof(recv_buf), 0);
std::cout << "Seen hello from my friend here: " << recv_buf << ".\n";
// stop after loops
if (count++ > 10)
break;
}
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
int main(int argc, char* argv[])
{
if (argc < 4) {
std::cerr << "Usage: socktest <remote_IP> <hello_string> <f|s>\n";
return 1;
}
test(argv[1], argv[2], *argv[3] == 'f');
return 0;
}
As expected, there is no delay. Here is output from one of the pairs (I run the code in pairs on two machines in the same network):
./socktest x.x.x.x 'ThingTwo' f
TX to x.x.x.x
First launched, waiting for hello.
Seen hello from my friend here: ThingOne.
Send time nanosecs 17726
Seen hello from my friend here: ThingOne.
Send time nanosecs 6479
Seen hello from my friend here: ThingOne.
Send time nanosecs 6362
Seen hello from my friend here: ThingOne.
Send time nanosecs 6048
Seen hello from my friend here: ThingOne.
Send time nanosecs 6246
Seen hello from my friend here: ThingOne.
Send time nanosecs 5691
Seen hello from my friend here: ThingOne.
Send time nanosecs 5665
Seen hello from my friend here: ThingOne.
Send time nanosecs 5930
Seen hello from my friend here: ThingOne.
Send time nanosecs 6082
Seen hello from my friend here: ThingOne.
Send time nanosecs 5493
Seen hello from my friend here: ThingOne.
Send time nanosecs 5893
Seen hello from my friend here: ThingOne.
Send time nanosecs 5597
It is good practice to segregate Tx and Rx ports. I derive my own socket class from CAsyncSocket, as it has a message pump that sends a system message when data is received on your socket and invokes the OnReceive function (either yours, if you override the underlying virtual function, or the default if you don't).
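For illustration, a minimal sketch of that pattern, assuming MFC (the class name, buffer size and port numbers are placeholders):
#include <afxsock.h>   // MFC sockets (CAsyncSocket)

// Receive-only socket: OnReceive() is driven by the message pump whenever a
// datagram arrives, so the rest of the application never blocks on a read.
class CUdpRxSocket : public CAsyncSocket
{
public:
    virtual void OnReceive(int nErrorCode)
    {
        char buf[10000];
        CString peerAddr;
        UINT peerPort = 0;
        int n = ReceiveFrom(buf, sizeof(buf), peerAddr, peerPort);
        if (n > 0)
        {
            // hand the datagram off for processing / onward transmission here
        }
        CAsyncSocket::OnReceive(nErrorCode);
    }
};
// Usage sketch, with Rx and Tx on separate ports:
//   rxSock.Create(5001, SOCK_DGRAM);
//   txSock.Create(5002, SOCK_DGRAM);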
Related
I have a server which uses a ZMQ_ROUTER to communicate with ZMQ_DEALER clients. I set the ZMQ_HEARTBEAT_IVL and ZMQ_HEARTBEAT_TTL options on the client socket to make the client and server ping-pong each other. Besides, because of the ZMQ_HEARTBEAT_TTL option, the server will time out the connection if it does not receive any pings from the client within a time period, according to the zmq man page:
The ZMQ_HEARTBEAT_TTL option shall set the timeout on the remote peer for ZMTP heartbeats. If
this option is greater than 0, the remote side shall time out the connection if it does not
receive any more traffic within the TTL period. This option does not have any effect if
ZMQ_HEARTBEAT_IVL is not set or is 0. Internally, this value is rounded down to the nearest
decisecond, any value less than 100 will have no effect.
Therefore, what I expect the server to do is that, when it does not receive any traffic from a client within a time period, it will close the connection to that client and discard all the messages in the outgoing queue after the linger time expires. I created a toy example to check whether my hypothesis is correct, and it turns out that it is not. The chain of events is as follows:
The server sends a bunch of data to the client.
The client receives and processes the data, which is slow.
All send commands return successfully.
While the client is still receiving the data, I unplug the internet cable.
After a few seconds (set by the ZMQ_HEARTBEAT_TTL option), the server starts sending FIN signals to the client, which are not being ACKed back.
The outgoing messages are not discarded (I check the memory consumption) even after a while. They are discarded only if I call zmq_close on the router socket.
So my question is, is this supposed to be how one should use the ZMQ heartbeat mechanism? If it is not, then is there any solution for what I want to achieve? I figure that I can do the heartbeat myself instead of using ZMQ's built-in one. However, even if I do, it seems that ZMQ does not provide a way to close a connection between a ZMQ_ROUTER and a ZMQ_DEALER, although another flavour of ZMQ_ROUTER - ZMQ_STREAM - provides a way to do this by sending an identity frame followed by an empty frame (see the sketch below).
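For reference, a minimal sketch of that ZMQ_STREAM mechanism (the socket and identity variables are placeholders that would come from an earlier zmq_msg_recv on the stream socket):
#include <zmq.h>

/* Drop a single peer on a ZMQ_STREAM socket: send its identity frame
   followed by a zero-length frame. */
static void close_stream_peer(void *stream, const void *peer_id, size_t id_size)
{
    zmq_send(stream, peer_id, id_size, ZMQ_SNDMORE); /* identity frame */
    zmq_send(stream, "", 0, 0);                      /* empty frame => close */
}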
The toy example is below; any help would be appreciated.
Server's side:
#include <zmq.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main(int argc, char **argv)
{
void *context = zmq_ctx_new();
void *router = zmq_socket(context, ZMQ_ROUTER);
int router_mandatory = 1;
zmq_setsockopt(router, ZMQ_ROUTER_MANDATORY, &router_mandatory, sizeof(router_mandatory));
int hwm = 0;
zmq_setsockopt(router, ZMQ_SNDHWM, &hwm, sizeof(hwm));
int linger = 3000;
zmq_setsockopt(router, ZMQ_LINGER, &linger, sizeof(linger));
char bind_addr[1024];
sprintf(bind_addr, "tcp://%s:%s", argv[1], argv[2]);
if (zmq_bind(router, bind_addr) == -1) {
perror("ERROR");
exit(1);
}
// Receive client identity (only 1)
zmq_msg_t identity;
zmq_msg_init(&identity);
zmq_msg_recv(&identity, router, 0);
zmq_msg_t dump;
zmq_msg_init(&dump);
zmq_msg_recv(&dump, router, 0);
printf("%s\n", (char *) zmq_msg_data(&dump)); // hello
zmq_msg_close(&dump);
char buff[1 << 16];
for (int i = 0; i < 50000; ++i) {
if (zmq_send(router, zmq_msg_data(&identity),
zmq_msg_size(&identity),
ZMQ_SNDMORE) == -1) {
perror("ERROR");
exit(1);
}
if (zmq_send(router, buff, 1 << 16, 0) == -1) {
perror("ERROR");
exit(1);
}
}
printf("OK IM DONE SENDING\n");
// All send commands have returned successfully
// While the client is still receiving data, I unplug the intenet cable on the client machine
// After a while, the server starts sending FIN signals
printf("SLEEP before closing\n"); // At this point, the messages are not discarded (memory usage is high).
getchar();
zmq_close(router);
zmq_ctx_destroy(context);
}
Client's side:
#include <zmq.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv)
{
void *context = zmq_ctx_new();
void *dealer = zmq_socket(context, ZMQ_DEALER);
int heartbeat_ivl = 3000;
int heartbeat_timeout = 6000;
zmq_setsockopt(dealer, ZMQ_HEARTBEAT_IVL, &heartbeat_ivl, sizeof(heartbeat_ivl));
zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TIMEOUT, &heartbeat_timeout, sizeof(heartbeat_timeout));
zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TTL, &heartbeat_timeout, sizeof(heartbeat_timeout));
int hwm = 0;
zmq_setsockopt(dealer, ZMQ_RCVHWM, &hwm, sizeof(hwm));
char connect_addr[1024];
sprintf(connect_addr, "tcp://%s:%s", argv[1], argv[2]);
zmq_connect(dealer, connect_addr);
zmq_send(dealer, "hello", 6, 0);
size_t size = 0;
int i = 0;
while (size < (1ll << 16) * 50000) {
zmq_msg_t msg;
zmq_msg_init(&msg);
if (zmq_msg_recv(&msg, dealer, 0) == -1) {
perror("ERROR");
exit(1);
}
size += zmq_msg_size(&msg);
printf("i = %d, size = %ld, total = %ld\n", i, zmq_msg_size(&msg), size); // This causes the cliet to be slow
// Somewhere in this loop I unplug the internet cable.
// The client starts sending FIN signals as well as trying to reconnect. The recv command hangs forever.
zmq_msg_close(&msg);
++i;
}
zmq_close(dealer);
zmq_ctx_destroy(context);
}
PS: I know that setting the high-water mark to unlimited is bad practice; however, I figure the problem would be the same even if the high-water mark were low, so let's ignore it for now.
I have written a device discovery program that can run in client or server mode. In client mode it sends a UDP broadcast packet to 255.255.255.255 on port 30000 and then listens for responses on port 30001. In server mode it listens for a UDP broadcast on port 30000 and sends a UDP broadcast packet to 255.255.255.255 on port 30001 in response.
When I run this program on 2 devices with IP addresses 192.168.10.61 and 192.168.10.62 it all works perfectly. The whole point of this program is to allow devices with unknown IP addresses to discover one another so long as they are connected to the same physical network. So to test that, I changed the IP address of the first device to something random like 12.34.56.42/255.255.240. And now it stops working.
Using tcpdump on the 192.168.10.62 machine I can see that the UDP packet from the 12.34.56.42 machine was received:
# tcpdump -i eth0 port 30000 -c 1 -v
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:38:02.552427 IP (tos 0x0, ttl 64, id 18835, offset 0, flags [DF], proto UDP (17), length 49)
12.34.56.42.56815 > 255.255.255.255.30000: UDP, length 21
1 packet captured
6 packets received by filter
0 packets dropped by kernel
but my discovery program no longer receives it. This is the code I am using to receive the UDP broadcast packets:
int WaitForPacket(uint16_t portNum, vector<char>& udpBuf, udp::endpoint& remoteEndpoint, const chrono::milliseconds timeout)
{
io_service ioService;
udp::socket socket(ioService, udp::endpoint(boost::asio::ip::address_v4::any(), portNum));
socket.set_option(socket_base::broadcast(true));
boost::system::error_code error;
int numBytes = receive_from(socket, buffer(udpBuf), remoteEndpoint, error, timeout);
if (error && error != error::message_size && error != error::timed_out)
{
printf("Got error: %s\n", error.message().c_str());
return -1;
}
return numBytes;
}
/*
* The boost asio library does not provide a blocking read with timeout function so we have to roll our own.
*/
int receive_from(
boost::asio::ip::udp::socket& socket,
const boost::asio::mutable_buffers_1& buf,
boost::asio::ip::udp::endpoint& remoteEndpoint,
boost::system::error_code& error,
chrono::milliseconds timeout)
{
volatile bool ioDone = false;
int numBytesReceived = 0;
boost::asio::io_service& ioService = socket.get_io_service();
socket.async_receive_from(buf, remoteEndpoint,
[&error, &ioDone, &numBytesReceived](const boost::system::error_code& errorAsync, size_t bytesReceived)
{
ioDone = true;
error = errorAsync;
numBytesReceived = bytesReceived;
});
this_thread::sleep_for(chrono::milliseconds(100));
ioService.reset();
ioService.poll_one();
auto endTime = chrono::system_clock::now() + timeout;
while (!ioDone)
{
ioService.reset();
this_thread::sleep_for(chrono::milliseconds(100));
auto now = chrono::system_clock::now();
if (now > endTime)
{
socket.cancel();
error = error::timed_out;
return 0;
}
ioService.poll_one();
}
ioService.reset();
return numBytesReceived;
}
I checked similar questions on Stack Overflow and found some that said the receiving socket has to be bound to INADDR_ANY in order to receive broadcast packets. I was originally creating the socket like this udp::socket socket(ioService, udp::endpoint(udp::v4(), portNum)); but have now changed it to use ip::address_v4::any() instead. That didn't make any difference.
Can anybody tell me what I need to change in order to receive the UDP broadcast packet as expected ?
I am running this on iMX6 devices running Linux and am compiling using boost 1.58.
I finally discovered why my code was never seeing the broadcast packets even though tcpdump proved that they were being received. After finding this StackOverflow question:
Disable reverse path filtering from Linux kernel space
it seems that I just needed to disable Reverse Path Filtering on both hosts like this:
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
and then my code worked perfectly with no modifications. Hopefully this will help other people wondering why they can't get UDP broadcasts to the network broadcast address (255.255.255.255) to work.
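To make the change survive a reboot, the equivalent settings can be added to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reloaded with sysctl -p:
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0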
I have setup two Raspberry Pis to use UDP sockets, one as the client and one as the server. The kernel has been patched with RT-PREEMPT (4.9.43-rt30+). The client acts as an echo to the server to allow for the calculation of Round-Trip Latency (RTL). At the moment a send frequency of 10Hz is being used on the server side with 2 threads: one for sending the messages to the client and one for receiving the messages from the client. The threads are setup to have a schedule priority of 95 using Round-Robin scheduling.
The server constructs a message containing the time the message was sent and the time past since messages started being sent. This message is sent from the server to the client then immediately returned to the server. Upon receiving the message back from the client the server calculates the Round-Trip Latency and then stores it in a .txt file, to be used for plotting using Python.
The problem is that when analysing the graphs I noticed there is a periodic spike in the RTL. The top graph of the image (RTL latency and sendto() + recvfrom() times; in the legend I have used RTT instead of RTL) shows these spikes, which are directly related to the spikes shown in the server-side sendto() and recvfrom() calls. Any suggestions on how to remove these spikes? My application is very reliant on consistency.
Things I have tried and noticed:
The size of the message being sent has no effect. I have tried larger messages (1024 bytes) and smaller messages (0 bytes) and the periodic delay does not change. This suggests to me that it is not a buffer issue as there is nothing filling up?
The frequency at which the messages are sent does play a big role, if the frequency is doubled then the latency spikes occur twice as often. This then suggests that something is filling up and while it empties the sendto()/recvfrom() functions experience a delay?
Changes to the buffer size with setsockopt() have no effect.
I have tried quite a few other settings (MSG_DONTWAIT, etc) to no avail.
I am by no means an expert in sockets/C++ programming/Linux so any suggestions given will be greatly appreciated as I am out of ideas. Below is the code used to create the socket and start the server threads for sending and receiving the messages. Below that is the code for sending the messages from the server, if you need the rest please let me know but for now my concern is centred around the delay caused by the sendto() function. If you need anything else please let me know. Thanks.
thread_priority = priority;
recv_buff = recv_buff_len;
std::cout << del << " Second start-up delay..." << std::endl;
sleep(del);
std::cout << "Delay complete..." << std::endl;
master = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
// master socket creation
if(master == -1){// socket() returns -1 on failure
perror("Could not create the socket: ");
exit(EXIT_FAILURE);
}
std::cout << "Master Socket Created..." << std::endl;
std::cout << "Adjusting send and receive buffers..." << std::endl;
setBuff();
// Server address and port creation
serv.sin_family = AF_INET;// Address family
serv.sin_addr.s_addr = INADDR_ANY;// Server IP address, INADDR_ANY will work on the server side only
serv.sin_port = htons(portNum);
server_len = sizeof(serv);
// Binding of master socket to specified address and port
if (bind(master, (struct sockaddr *) &serv, sizeof (serv)) < 0) {
//Attempt to bind master socket to address
perror("Could not bind socket...");
exit(EXIT_FAILURE);
}
// Show what address and port is being used
char IP[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &(serv.sin_addr), IP, INET_ADDRSTRLEN);// INADDR_ANY
allows all network interfaces so it will always show 0.0.0.0
std::cout << "Listening on port: " << htons(serv.sin_port) << ", and
address: " << IP << "..." << std::endl;
// Options specific to the server RPi
if(server){
std::cout << "Run Time: " << duration << " seconds." << std::endl;
client.sin_family = AF_INET;// Address family
inet_pton(AF_INET, clientIP.c_str(), &(client.sin_addr));
client.sin_port = htons(portNum);
client_len = sizeof(client);
serv_send = std::thread(&SocketServer::serverSend, this);
serv_send.detach();// The server send thread just runs continuously
serv_receive = std::thread(&SocketServer::serverReceive, this);
serv_receive.join();
}else{// Specific to client RPi
SocketServer::clientReceiveSend();
}
And the code for sending the messages:
// Setup the priority of this thread
param.sched_priority = thread_priority;
int result = sched_setscheduler(getpid(), SCHED_RR, &param);
if(result){
perror ("The following error occurred while setting serverSend() priority");
}
int ched = sched_getscheduler(getpid());
printf("serverSend() priority result %i : Scheduler priority id %i \n", result, ched);
std::ofstream Out;
std::ofstream Out1;
Out.open(file_name);
Out << duration << std::endl;
Out << frequency << std::endl;
Out << thread_priority << std::endl;
Out.close();
Out1.open("Server Side Send.txt");
packets_sent = 0;
Tbegin = std::chrono::high_resolution_clock::now();
// Send messages for a specified time period at a specified frequency
while(!stop){
// Setup the message to be sent
Tstart = std::chrono::high_resolution_clock::now();
TDEL = std::chrono::duration_cast< std::chrono::duration<double>>(Tstart - Tbegin); // Total time passed before sending message
memcpy(&message[0], &Tstart, sizeof(Tstart));// Send the time the message was sent with the message
memcpy(&message[8], &TDEL, sizeof(TDEL));// Send the time that had passed since Tstart
// Send the message to the client
T1 = std::chrono::high_resolution_clock::now();
sendto(master, &message, 16, MSG_DONTWAIT, (struct sockaddr *)&client, client_len);
T2 = std::chrono::high_resolution_clock::now();
T3 = std::chrono::duration_cast< std::chrono::duration<double>>(T2-T1);
Out1 << T3.count() << std::endl;
packets_sent++;
// Pause so that the required message send frequency is met
while(true){
Tend = std::chrono::high_resolution_clock::now();
Tdel = std::chrono::duration_cast< std::chrono::duration<double>>(Tend - Tstart);
if(Tdel.count() > 1/frequency){
break;
}
}
TDEL = std::chrono::duration_cast< std::chrono::duration<double>>(Tend - Tbegin);
// Check to see if the program has run as long as required
if(TDEL.count() > duration){
stop = true;
break;
}
}
std::cout << "Exiting serverSend() thread..." << std::endl;
// Save extra results to the end of the last file
Out.open(file_name, std::ios_base::app);
Out << packets_sent << "\t\t " << packets_returned << std::endl;
Out.close();
Out1.close();
std::cout << "^C to exit..." << std::endl;
I have sorted out the problem. It was not the ARP tables as even with the ARP functionality disabled there was a periodic spike. With the ARP functionality disabled there would only be a single spike in latency as opposed to a series of latency spikes.
It turned out to be a problem with the threads I was using as there were two threads on a CPU only capable of handling one thread at a time. The one thread that was sending the information was being affected by the second thread that was receiving information. I changed the thread priorities around a lot (send priority higher than receive, receive higher than send and send equal to receive) to no avail. I have now bought a Raspberry Pi that has 4 cores and I have set the send thread to run on core 2 while the receive thread runs on core 3, preventing the threads from interfering with each other. This has not only removed the latency spikes but also reduced the mean latency of my setup.
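For reference, a minimal sketch of pinning a std::thread to a fixed core with pthread_setaffinity_np (the core numbers are illustrative; this mirrors the separation described above rather than reproducing my exact code):
#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin a std::thread to one core so the send and receive threads can no
// longer preempt each other on the same CPU.
static void pin_to_core(std::thread& t, int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}
// Usage sketch (before detaching/joining):
//   pin_to_core(serv_send, 2);
//   pin_to_core(serv_receive, 3);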
I'm using a MacBook Pro to send a series of 1024-byte UDP datagrams in my main thread over a socket using a multicast address and port every 12 ms (ugly but illustrative):
for(;;) {
//-------------- Send ----------------
try {
sock.sendTo(filebufPos, readsize, mcAddr, mcPort);
sendCount++;
if(sendCount < sends_needed) {
filebufPos += readsize;
} else {
sendCount = 0; //Reset send counter
filebufPos = filebuf; //Reset pointer to start of file buffer
}
} catch (SocketException &e) {
cerr << e.what() << endl;
}
usleep(12000); //------------ Pause between sends-----------
}
On my iPhone 5, I try to receive the datagrams using a non-blocking 'recvFrom' call on the same multicast address and port within a callback routine that gets called every 1.5 ms, ala:
try {
nBytesReceived = sock->recvFrom((void *)buf, nBytesCount, mcAddr, mcPort);
} catch (SocketException &e) {
cerr << e.what() << endl;
}
I measure the time between successful UDP socket recvs on the iPhone client side. Ideally, I should receive the UDP datagrams every 8 callbacks (12 ms), and for the most part this is the case. However, sometimes the time between recvs is very short, while at other times it can be as long as 100-150 ms between recvs.
Any ideas why this might be happening?
Thanks!
I have a winsock-server, accepting packets from a local IP, which currently works without using IOCP. I want it to be non-blocking though, working through IOCP. Yes I know about the alternatives (select, WSAAsync etc.), but this won't do it for developing an MMO server.
So here's the question - how do I do this using std::thread and IOCP?
I already know that GetQueuedCompletionStatus() dequeues packets, while PostQueuedCompletionStatus() queues those to the IOCP.
Is this the proper way to do it async though?
How can I treat all clients equally on about 10 threads? I thought about receiving UDP packets and processing those while IOCP has something in queue, but packets will be processed by max 10 at a time and I also have an infinite loop in each thread.
The target is creating a game server, capable of holding thousands of clients at the same time.
About the code: netListener() is a class, holding packets received from the listening network interface in a vector. All it does in Receive() is
WSARecvFrom(sockfd, &buffer, 1, &bytesRecv, &flags, (SOCKADDR*)&senderAddr, &size, &overl, 0);
std::cout << "\n\nReceived " << bytesRecv << " bytes.\n" << "Packet [" << std::string(buffer.buf, bytesRecv)<< "]\n";*
The code works, buffer shows what I've sent to myself, but I'm not sure whether having only ONE receive() will suffice.
About blocking - yes, I realized that putting listener.Receive() into a separate thread doesn't block the main thread. But imagine this - lots of clients try to send packets, can one receive process them all? Not to mention I was planning to queue an IOCP packet on each receive, but still not sure how to do this properly.
And another question - is it possible to establish a direct connection between a client and another client? If you host a server on a local machine behind NAT and you want it to be accessible from the internet, for example.
Threads:
void Host::threadFunc(int i) {
threadMutex.lock();
for (;;) {
if (m_Init) {
if (GetQueuedCompletionStatus(iocp, &bytesReceived, &completionKey, (LPOVERLAPPED*)&overl, WSA_INFINITE)) {
std::cout << "1 completion packet dequeued, bytes: " << bytesReceived << std::endl;
}
}
}
threadMutex.unlock(); }
void Host::createThreads() {
//Create threads
for (unsigned int i = 0; i < SystemInfo.dwNumberOfProcessors; ++i) {
threads.push_back(std::thread(&Host::threadFunc, this, i));
if (threads[i].joinable()) threads[i].detach();
}
std::cout << "Threads created: " << threads.size() << std::endl; }
Host
Host::Host() {
using namespace std;
InitWSA();
createThreads();
m_Init = true;
SecureZeroMemory((PVOID)&overl, sizeof(WSAOVERLAPPED));
overl.hEvent = WSACreateEvent();
iocp = CreateIoCompletionPort((HANDLE)sockfd, iocp, 0, threads.size());
listener = netListener(sockfd, overl, 12); //12 bytes buffer size
for (int i = 0; i < 4; ++i) { //IOCP queue test
if (PostQueuedCompletionStatus(iocp, 150, completionKey, &overl)) {
std::cout << "1 completion packet queued\n";
}
}
std::cin.get();
listener.Receive(); //Packet receive test - adds a completion packet n bytes long if client sent one
std::cin.get();}