Timeout for dropped packets (UDP) - c++

I'm trying to create a timeout using select() for UDP socket transfer. I want to send an int from client to server, wait 300ms, and if I don't get an ACK, resend the packet. I'm not sure how to set this up properly with the timeout. From what I've gathered online and from the notes I have from class, select should be used on the receiving end.
The client and the server send the numbers 1-100 back and forth. I have separate simulated-router code that randomly drops packets.
Here is the code I have for the client side:
int sent = 1;
int received = 1;
for (int i = 0; i < 100; i++)
{
string sent1 = to_string(sent);
char const *pchar = sent1.c_str();
if(!sendto(s, pchar, sizeof(pchar), 0, (struct sockaddr*) &sa_in, sizeof(sa_in)))
cout << "send NOT successful\n";
else
{
cout << "Client sent " << sent << endl;
sent++;
}
// receive
fd_set readfds; //fd_set is a type
FD_ZERO(&readfds); //initialize
FD_SET(s, &readfds); //put the socket in the set
if(!(outfds = select (1 , &readfds, NULL, NULL, & timeouts)))
break;
if (outfds == 1) //receive frame
{
if (!recvfrom(s, buffer2, sizeof(buffer2), 0, (struct sockaddr*) &client, &client_length))
cout << "receive NOT successful\n";
else
{
received = atoi(buffer2);
cout << "Client received " << received << endl;
received++;
}
}
}
The code for the receiving side is identical except in reverse: receive first, then send.
My code doesn't use the timeout at all. This is basically what I want to do:
send packet(N)
if (timeout)
resend packet(N)
else
send packet(N+1)

If the receiver times out, it either needs to tell the sender (a NACK-based protocol) or stay silent and let the sender's own timer trigger the retransmission (an ACK-based protocol, where the receiver only acknowledges what it does receive). In other words, you have to implement one or the other.
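For the ACK-based variant, the client loop only needs the timeout filled in and a retransmit branch around the select() call you already have. Here is a minimal sketch, assuming the same socket s and peer address sa_in as in the question; MAX_RETRIES is a made-up cap:
// Stop-and-wait sender: send seq, wait up to 300 ms for the echoed ACK, resend on timeout.
const int MAX_RETRIES = 10;
for (int seq = 1; seq <= 100; )
{
    string payload = to_string(seq);
    bool acked = false;
    for (int tries = 0; !acked && tries < MAX_RETRIES; tries++)
    {
        // send the actual string contents (not sizeof of a pointer)
        sendto(s, payload.c_str(), payload.size() + 1, 0, (struct sockaddr*) &sa_in, sizeof(sa_in));
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(s, &readfds);
        struct timeval tv;         // select() may modify this, so reset it on every attempt
        tv.tv_sec = 0;
        tv.tv_usec = 300 * 1000;   // 300 ms
        int ready = select(s + 1, &readfds, NULL, NULL, &tv);   // first argument is highest fd + 1
        if (ready > 0 && FD_ISSET(s, &readfds))
        {
            char ackbuf[32];
            socklen_t len = sizeof(sa_in);
            int n = recvfrom(s, ackbuf, sizeof(ackbuf) - 1, 0, (struct sockaddr*) &sa_in, &len);
            if (n > 0)
            {
                ackbuf[n] = '\0';
                if (atoi(ackbuf) == seq)   // the ACK we were waiting for
                    acked = true;
            }
        }
        // ready == 0 means the 300 ms expired: loop around and resend the same seq
    }
    if (acked)
        seq++;      // send packet N+1
    else
        break;      // peer seems gone, give up
}
The receiving side stays as you have it, except that it echoes the number back as the ACK for every packet it accepts.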

Trying to connect() multiple times in TCP

I'm writing a client/server application where the client and server should send data to each other via a TCP socket. The client should connect to the server and if the connection fails, it should wait a few seconds and then try again to connect to it (up to a certain number of tries).
This is the code I currently have:
const int i_TRIES = 5;
time_t t_timeout = 3000;
int i_port = 5678;
int i_socket;
string s_IP = "127.0.0.1";
for(int i = 0; i < i_TRIES; i++)
{
if((i_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
cout << "[Client]: Socket creation failed." << endl;
exit(EXIT_FAILURE);
}
memset(&server_address, '0', sizeof(server_address));
server_address.sin_family = AF_INET;
server_address.sin_port = htons(i_port);
if(inet_pton(AF_INET, s_IP.c_str(), &server_address.sin_addr) <= 0)
{
cout << "[Client]: Invalid IP address." << endl;
exit(EXIT_FAILURE);
}
if(connect(i_socket, (struct sockaddr *)&server_address, sizeof(server_address)) < 0)
{
if(i < i_TRIES - 2)
{
cout << "[Client]: Connection to server failed. Trying again in " << t_timeout << " ms." << endl;
close(i_socket);
sleep(t_timeout);
}
else
{
cout << "[Client]: Could not connect to server, exiting." << endl;
exit(EXIT_FAILURE);
}
}
else
{
cout << "[Client]: Successfully connected to server." << endl;
break;
}
}
// do stuff with socket
The issue I'm having is that the first call to connect() works as expected: it fails if there's no server and the loop repeats. However, the second time, connect() blocks forever (or at least for much longer than I want it to). Initially, my loop was just around the connect() if block (code below), and this caused the same problem. After that I included the whole socket setup (the code above) in the loop, but that also didn't help. I also tried closing the socket after a failed connection, but this didn't help either.
Initial for loop:
// other stuff from above here
for(int i = 0; i < i_TRIES; i++)
{
if(connect(i_socket, (struct sockaddr *)&server_address, sizeof(server_address)) < 0)
{
if(i < i_TRIES - 2)
{
cout << "[Client]: Connection to server failed. Trying again in " << t_timeout << " ms." << endl;
sleep(t_timeout);
}
else
{
cout << "[Client]: Could not connect to server, exiting." << endl;
exit(EXIT_FAILURE);
}
}
else
{
cout << "[Client]: Successfully connected to server." << endl;
break;
}
}
// do stuff with socket
Can I force connect() to return after a certain amount of time has passed? Or is there a way to get the connect() function to try multiple times on its own? Or is there something I need to do to the socket to reset everything before I can try again? I hope this isn't a dumb question; I couldn't find any information about how to connect multiple times.
Thanks in advance!
Can I force connect() to return after a certain amount of time has passed?
No. You must put the socket into non-blocking mode and then use select() or (e)poll() to provide timeout logic while you wait for the socket to connect. If the connection fails, or takes too long to connect, close the socket, create a new one, and try again.
Or is there a way to get the connect() function to try multiple times on it's own?
No. It can perform only 1 connection attempt per call.
Or is there something I need to do to the socket to reset everything before I can try again?
There is no guarantee that you can even call connect() multiple times on the same socket. On some platforms, you must destroy the socket and create a new socket before you call connect() again. You should get in the habit of doing that for all platforms.
Put the socket into non-blocking mode and use select() to implement the timeout. Select for writability on the socket. Note that you can decrease the platform connect timeout by this means, but not increase it.
The sleep() is pointless, just literally a waste of time.
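A rough sketch of that non-blocking connect for POSIX sockets (the helper name connect_with_timeout and its parameters are my own, and most error handling is trimmed):
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
// Attempt one connection with a timeout. Returns the connected fd, or -1 on failure.
int connect_with_timeout(const sockaddr* addr, socklen_t addrlen, int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   // non-blocking mode
    int rc = connect(fd, addr, addrlen);
    if (rc < 0 && errno != EINPROGRESS)    // immediate failure
    {
        close(fd);
        return -1;
    }
    if (rc < 0)                            // connection attempt still in progress
    {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { timeout_sec, 0 };
        if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)    // timed out or select error
        {
            close(fd);
            return -1;
        }
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);   // did the connect succeed?
        if (err != 0)
        {
            close(fd);
            return -1;
        }
    }
    return fd;   // still non-blocking; switch it back to blocking here if you prefer
}
The retry loop then calls this in place of the blocking connect(), creating a fresh socket on every attempt as recommended above.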

Why does connecting to server with 2nd socket while using select() "break" first connection?

I'm following the tutorial (big code block near the bottom of that section) here: http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
The main server code is like so:
while (true)
{
read_fds = master;
if (select(fd_max + 1, &read_fds, NULL, NULL, NULL) == -1)
{
cerr << "ERROR. Select failed" << endl;
return -1;
}
for (int i = 0; i <= fd_max; i++)
{
if (FD_ISSET(i, &read_fds))
{
if (i == welcome_socket)
{
cout << "NEW CONNECTION" << endl;
client_len = sizeof(struct sockaddr_in);
client_sock = accept(welcome_socket, (struct sockaddr *) &client_addr, &client_len);
if (client_sock != -1)
{
FD_SET(client_sock, &master);
if (client_sock > fd_max)
{
fd_max = client_sock;
}
}
}
else
{
int length, total_read = 0;
// CONNECTION CLOSED BY CLIENT
if (safe_recv(client_sock, &length, sizeof(int)) <= 0)
{
cout << "CONNECTION CLOSED" << endl;
close(i);
FD_CLR(i, &master);
}
else
{
char *message = (char *)memset((char *)malloc(length + 1), 0, length);
// while ((total_read += safe_recv(client_sock, message + total_read, length - total_read)) < length) {}
safe_recv(client_sock, message, length);
// RESPOND WITH MESSAGE
cout << "MESSAGE: " << message << endl;
write(client_sock, process(message), length);
free(message);
}
}
}
}
}
What I'm doing is first sending (from the client) the length of the string, then the string itself. Then the server sends back process(message).
When I only have 1 connection, I'm seeing correct behaviour. However if 1 is connected already and I connect a new client, what I'm seeing is:
1st client no longer sends or receives anything from server (concluded because nothing is printed to stdout on client side)
2nd client is working as expected
When 2nd connection exits, server counts that as both connections exiting (prints CONNECTION CLOSED twice)
I've tried to keep this very similar to the tutorial code. I've run the tutorial server, and that works as intended with several clients.
I'm new to network programming, so I apologise if this is a beginner problem or just something dumb I overlooked.
The code only ever reads from and writes to client_sock, and client_sock is overwritten with the new socket in the accept-handling portion of the code.
Most likely you want to interact with i rather than client_sock.
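In other words, the data-handling branch should use the descriptor the loop is currently examining, something along these lines (same structure as the original, only the socket variable changes):
else
{
    int length;
    // use i, the socket select() flagged as readable, not client_sock
    if (safe_recv(i, &length, sizeof(int)) <= 0)
    {
        cout << "CONNECTION CLOSED" << endl;
        close(i);
        FD_CLR(i, &master);
    }
    else
    {
        char *message = (char *)calloc(length + 1, 1);
        safe_recv(i, message, length);
        cout << "MESSAGE: " << message << endl;
        write(i, process(message), length);   // reply to the same client that sent the message
        free(message);
    }
}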

not getting a Winsock error when ethernet unplugged

I'm working on a client application that sends sensor data one way to a remote server. After the initial login there is no return data from the server. My problem is that when the Ethernet connection is lost abruptly, e.g. the wireless link goes down, my application does not get an error return value after attempting a 'send' call. I am using a single non-blocking socket instance. The thread checks for a 'recv' each loop using 'select'. It does eventually get an error on 'recv' but never on 'send'.
When the remote PC loses internet connectivity, the program can stay disconnected from the server for minutes to hours before it recognises that the connection was lost and switches to re-logging in to the server. What can be done to help detect the hard disconnect?
void checkConnect(NTRIP& server)
{
//1st check for recv or gracefully closed socket
char databuf[SERIAL_BUFFERSIZE];
fd_set Reader, Writer, Err;
TIMEVAL Timeout;
Timeout.tv_sec = 1; // timeout after 1 seconds
Timeout.tv_usec = 0;
FD_ZERO(&Reader);
FD_ZERO(&Err);
FD_SET(server.socket, &Reader);
FD_SET(server.socket, &Err);
int iResult = select(0, &Reader, NULL, &Err, &Timeout);
if(iResult > 0)
{
if(FD_ISSET(server.socket, &Reader) )
{
int recvBytes = recv(server.socket, databuf, sizeof(databuf), 0);
if(recvBytes == SOCKET_ERROR)
{
cout << "socket error on receive call from server " << WSAGetLastError() << endl;
closesocket(server.socket);
server.connected_IP = false;
}
else if(recvBytes == 0)
{
cout << "server closed the connection gracefully" << endl;
closesocket(server.socket);
server.connected_IP = false;
}
else //>0 bytes were received so read data if needed
{
}
}
if(FD_ISSET(server.socket, &Err))
{
cout << "select returned socket in error state" << endl;
closesocket(server.socket);
server.connected_IP = false;
}
}
else if(iResult == SOCKET_ERROR)
{
cout << "ip thread select socket error " << WSAGetLastError() << endl;
closesocket(server.socket);
server.connected_IP = false;
}
//2nd check hard disconnect if no other data has been sent recently
if(server.connected_IP == true && getTimePrecise() - server.lastDataSendTime > 5.0)
{
char buf1[] = "hello";
cout << "checking send for error" << endl;
iResult = send(server_main.socket, buf1, sizeof(buf1), 0);
if(iResult == SOCKET_ERROR)
{
int lasterror = WSAGetLastError();
if(lasterror == WSAEWOULDBLOCK)
{
cout << "server send WSAEWOULDBLOCK" << endl;
}
if(lasterror != WSAEWOULDBLOCK)
{
cout << "server testing connection send function error " << lasterror << endl;
closesocket(server.socket);
server.connected_IP = false;
}
}
else
{
cout << "sent out " << iResult << " bytes" << endl;
}
server.lastDataSendTime = getTimePrecise();
}
}
It is not possible to detect a disconnect until you try to send something.
The solution for you is the following:
1. You detect that you have received no data for a certain period of time and you want to check whether the connection is alive.
2. You send some data to the server using the send function. It could be a protocol-specific ping packet or even garbage. The send function returns immediately, because it does not wait for the data to actually be sent; it only fills the internal send buffer.
3. You begin waiting for a socket read.
4. While you are waiting, the OS tries to send the data in the send buffer to the server.
5. When the OS detects that it cannot deliver the data to the server, the connection is marked as erroneous.
6. Now you will get an error when calling the recv and send functions.
The send timeout is system-specific and can be configured. Usually it is about 20 seconds (Linux) to 2 minutes (Windows), which means you may have to wait quite a while before you receive an error.
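A condensed sketch of that probe-and-wait sequence with Winsock calls, assuming a connected non-blocking SOCKET s; the one-second wait is just an example:
// Queue a small probe; send() only copies it into the send buffer.
const char probe[] = "ping";
if (send(s, probe, sizeof(probe), 0) == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK)
{
    // the connection is already known to be broken
}
// Wait for readability while the OS tries to deliver the probe in the background.
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(s, &readfds);
TIMEVAL tv = { 1, 0 };                       // 1 second
int ready = select(0, &readfds, NULL, NULL, &tv);
// Once delivery ultimately fails, recv() and send() start reporting the error.
if (ready > 0 && FD_ISSET(s, &readfds))
{
    char buf[256];
    int n = recv(s, buf, sizeof(buf), 0);
    if (n == 0 || (n == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK))
    {
        // closed or broken: close the socket and go back to the login state
    }
}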
Notes:
You can also turn on TCP keep alive mechanism, but I don't recommend you to do this.
You can also modify TCP timeout intervals. It can be helpful when you want the connection to survive the temporary network disconnect.
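If you do decide to experiment with keepalive despite the caveat above, Winsock lets you enable it per socket and shorten its timing with SIO_KEEPALIVE_VALS. A hypothetical sketch; the timing values are examples only:
#include <winsock2.h>
#include <mstcpip.h>
bool enable_fast_keepalive(SOCKET s)
{
    tcp_keepalive ka;
    ka.onoff             = 1;       // turn keepalive on for this socket
    ka.keepalivetime     = 5000;    // ms of idle time before the first probe
    ka.keepaliveinterval = 1000;    // ms between probes once they start
    DWORD bytes = 0;
    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka), NULL, 0, &bytes, NULL, NULL) == 0;
}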
That's how TCP works and is intended to work. You will get an error from a subsequent send, but never from the first send after the disconnect. There is buffering, and retry, and retry timeout to overcome before an error is signalled.

"Connection was broken" error with UDT (UDP-based data transfer protocol)

I am programming a real-time game in which I need reliable UDP, so I've chosen to work with UDT (UDP-based data transfer protocol - http://sourceforge.net/projects/udt/).
The clients (on browsers) send real-time messages to my server via CGI scripts. The problem is that some messages are being lost, and I don't know why: the server says that it sent all the messages to the corresponding clients successfully, but sometimes a client doesn't receive the message.
In my debug file, I've found that when a message is not received by the client, its script says:
error in recv();
recv: Connection was broken.
I would like some help on how the server can know whether the client got its message; should I send a NACK or something from the client side? I thought UDT would do that for me. Can someone clarify this situation?
The relevant communication sections of my code are below, with some comments:
server's relevant code:
//...
void send_msg_in(player cur, char* xml){
/*this function stores the current message, xml, in a queue if xml!=NULL, and sends the 1st message of the queue to the client*/
/*this function is called when the player connects, with xml=NULL, to get the 1st message of the queue,
or with xml!=NULL when a new message arrives: in that case the message is stored in the queue and then sent at the appropriate time, i.e. the messages stay ordered.*/
char* msg_ptr=NULL;
if (xml!=NULL){ //add the message to a queue (FIFO), the cur.xml_msgs
msg_ptr=(char*) calloc(strlen(xml)+1, sizeof(char));
strcpy(msg_ptr, xml);
(*(cur.xml_msgs)).push(msg_ptr);
} //get the 1st message of the queue
if (!(*(cur.xml_msgs)).empty()){
xml=(*(cur.xml_msgs)).front();
}
if (cur.get_udt_socket_in()!=NULL){
UDTSOCKET cur_udt = *(cur.get_udt_socket_in());
// cout << "send_msg_in(), cur_udt: " << cur_udt << endl;
//send the "xml", i.e. the 1st message of the queue...
if (UDT::ERROR == UDT::send(cur_udt, xml, strlen(xml)+1, 0)){
UDT::close(cur_udt);
cur.set_udt_socket_in(NULL);
}
else{ //if no error this else is reached
cout << "TO client:\n" << xml << "\n"; /*if there is no error,
i.e. on success, the server prints the message that was sent.*/
// / \
// /_!_\
/*the problem is that
the messages that are lost don't appear on the client side,
but they appear here on the server! */
if (((string) xml).find("<ack.>")==string::npos){
UDT::close(cur_udt);
cur.set_udt_socket_in(NULL); //close the socket
}
(*(cur.xml_msgs)).pop();
}
}
}
//...
client's relevant code:
//...
#define MSGBUFSIZE 1024
char msgbuf[MSGBUFSIZE];
UDTSOCKET client;
ofstream myfile;
//...
main(int argc, char *argv[]){
//...
// connect to the server, implict bind
if (UDT::ERROR == UDT::connect(client, (sockaddr*)&serv_addr, sizeof(serv_addr))){
cout << "error in connect();" << endl;
return 0;
}
myfile.open("./log.txt", ios::app);
send(xml);
char* cur_xml;
do{
cur_xml = receive(); //wait for an ACK or a new message...
myfile << cur_xml << endl << endl; // / \
/* /_!_\ the lost messages don't appear on the website
neither on this log file.*/
} while (((string) cur_xml).find("<ack.>")!=string::npos);
cout << cur_xml << endl;
myfile.close();
UDT::close(client);
return 0;
}
char* receive(){
if (UDT::ERROR == UDT::recv(client, msgbuf, MSGBUFSIZE, 0)){
// / \
/* /_!_\ when a message is not well received
this code is usually reached, and an error is printed.*/
cout << "error in recv();" << endl;
myfile << "error in recv();" << endl;
myfile << "recv: " << UDT::getlasterror().getErrorMessage() << endl << endl;
return 0;
}
return msgbuf;
}
void* send(string xml){
if (UDT::ERROR == UDT::send(client, xml.c_str(), strlen(xml.c_str())+1, 0)){
cout << "error in send();" << endl;
myfile << "error in send();" << endl;
myfile << "send: " << UDT::getlasterror().getErrorMessage() << endl << endl;
return 0;
}
}
Thank you for any help!
PS. I tried to increase the linger time on close(), after finding the link http://udt.sourceforge.net/udt4/doc/opt.htm, adding the following to the server's code:
struct linger l;
l.l_onoff = 1;
l.l_linger = ...; //a huge value in seconds...
UDT::setsockopt(*udt_socket_ptr, 0, UDT_LINGER, &l, sizeof(l));
but the problem is still the same...
PPS. The other communication parts on the server side are the following (note: it seems to me that they are not so relevant):
main(int argc, char *argv[]){
char msgbuf[MSGBUFSIZE];
UDTSOCKET serv = UDT::socket(AF_INET, SOCK_STREAM, 0);
sockaddr_in my_addr;
my_addr.sin_family = AF_INET;
my_addr.sin_port = htons(PORT);
my_addr.sin_addr.s_addr = INADDR_ANY;
memset(&(my_addr.sin_zero), '\0', sizeof(my_addr.sin_zero));
if (UDT::ERROR == UDT::bind(serv, (sockaddr*)&my_addr, sizeof(my_addr))){
cout << "error in bind();";
return 0;
}
UDT::listen(serv, 1);
int namelen;
sockaddr_in their_addr;
while (true){
UDTSOCKET recver = UDT::accept(serv, (sockaddr*)&their_addr, &namelen);
if (UDT::ERROR == UDT::recv(recver, msgbuf, MSGBUFSIZE, 0)){
//this recv() function is called only once for each accept(); because the clients call CGI scripts via a browser, they need to call a new CGI script with a new UDT socket for each request (this is in agreement with the clients' code presented before).
cout << "error in recv();" << endl;
}
char* player_xml = (char*) &msgbuf;
cur_result = process_request((char*) &msgbuf, &recver, verbose); //ACK
}
}
struct result process_request(char* xml, UDTSOCKET* udt_socket_ptr, bool verbose){
//parse the XML...
//...
player* cur_ptr = get_player(me); //searches in a vector of player, according to the string "me" of the XML parsing.
UDTSOCKET* udt_ptr = (UDTSOCKET*) calloc(1, sizeof(UDTSOCKET));
memcpy(udt_ptr, udt_socket_ptr, sizeof(UDTSOCKET));
if (cur_ptr==NULL){
//register the player:
player* this_player = (player*) calloc(1, sizeof(player));
//...
}
}
else if (strcmp(request_type.c_str(), "info_waitformsg")==0){
if (udt_ptr!=NULL){
cur_ptr->set_udt_socket_in(udt_ptr);
if (!(*(cur_ptr->xml_msgs)).empty()){
send_msg_in(*cur_ptr, NULL, true);
}
}
}
else{ //messages that get instant response from the server.
if (udt_ptr!=NULL){
cur_ptr->set_udt_socket_out(udt_ptr);
}
if (strcmp(request_type.c_str(), "info_chat")==0){
info_chat cur_info;
to_object(&cur_info, me, request_type, msg_ptr); //convert the XML string values to a struct
process_chat_msg(cur_info, xml);
}
/* else if (...){ //other types of messages...
}*/
}
}
void process_chat_msg(info_chat cur_info, char* xml_in){
player* player_ptr=get_player(cur_info.me);
if (player_ptr){
int i=search_in_matches(matches, cur_info.match_ID);
if (i>=0){
match* cur_match=matches[i];
vector<player*> players_in = cur_match->followers;
int n=players_in.size();
for (int i=0; i<n; i++){
if (players_in[i]!=msg_owner){
send_msg_in(*(players_in[i]), xml, flag);
}
}
}
}
}
Looking at the UDT source code at http://sourceforge.net/p/udt/git/ci/master/tree/udt4/src/core.cpp, the error message "Connection was broken" is produced when either of the Boolean flags m_bBroken or m_bClosing is true and there is no data in the receive buffer.
Those flags are set in just a few cases:
In sections of code marked "should not happen; attack or bug" (unlikely)
In deliberate close or shutdown actions (don't see this happening in your code)
In expiration of a timer that checks for peer activity (the likely culprit)
In that source file at line 2593 it says:
// Connection is broken.
// UDT does not signal any information about this instead of to stop quietly.
// Application will detect this when it calls any UDT methods next time.
//
m_bClosing = true;
m_bBroken = true;
// ...[code omitted]...
// app can call any UDT API to learn the connection_broken error
Looking at the send() call, I don't see anywhere that it waits for an ACK or NAK from the peer before returning, so I don't think a successful return from send() on the server side is indicative of successful receipt of the message by the client.
You didn't show the code on the server side that binds to the socket and listens for responses from the client; if the problem is there then the server might be happily sending messages and never listening to the client that is trying to respond.
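If you stay with UDT, one way to confirm receipt at the application level is to have the client send back a short acknowledgement after each receive() and have the server wait for it before popping the message off the queue. A rough sketch mirroring the UDT calls already used above; the "<recvd/>" token and the buffer size are made up for illustration:
// after a successful UDT::send of xml on cur_udt, wait for the client's acknowledgement
char ackbuf[64];
if (UDT::ERROR == UDT::recv(cur_udt, ackbuf, sizeof(ackbuf), 0)){
    // no acknowledgement arrived (connection broken, client gone, ...):
    // keep the message in the queue so it can be resent on the next connection
    UDT::close(cur_udt);
    cur.set_udt_socket_in(NULL);
}
else if (strncmp(ackbuf, "<recvd/>", 8) == 0){
    (*(cur.xml_msgs)).pop();   // client confirmed receipt, safe to drop the message
}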
UDP is not a guaranteed-transmission protocol. A host will send a message, but if the recipient does not receive it, or if it is not received properly, no error will be raised. Therefore, it is commonly used in applications that require speed over perfect delivery, such as games. TCP does guarantee delivery, because it requires that a connection be set up first, and each message is acknowledged by the client.
I would encourage you to think about whether you actually need guaranteed receipt of that data, and, if you do, consider using TCP.

SFML TCP packet receive

I send a packet from the client to the server, and I want the server to forward that packet to all clients. Here is the code:
#include <iostream>
#include <SFML/Network.hpp>
using namespace std;
int main()
{
int fromID; // receive data from 'fromID'
int Message; // fromID's message
sf::SocketTCP Listener;
if (!Listener.Listen(4567))
return 1;
// Create a selector for handling several sockets (the listener + the socket associated to each client)
sf::SelectorTCP Selector;
Selector.Add(Listener);
while (true)
{
unsigned int NbSockets = Selector.Wait();
for (unsigned int i = 0; i < NbSockets; ++i)
{
// Get the current socket
sf::SocketTCP Socket = Selector.GetSocketReady(i);
if (Socket == Listener)
{
// If the listening socket is ready, it means that we can accept a new connection
sf::IPAddress Address;
sf::SocketTCP Client;
Listener.Accept(Client, &Address);
cout << "Client connected ! (" << Address << ")" << endl;
// Add it to the selector
Selector.Add(Client);
}
else
{
// Else, it is a client socket so we can read the data he sent
sf::Packet Packet;
if (Socket.Receive(Packet) == sf::Socket::Done)
{
// Extract the message and display it
Packet >> Message;
Packet >> fromID;
cout << Message << " From: " << fromID << endl;
//send the message to all clients
for(unsigned int j = 0; j < NbSockets; ++j)
{
sf::SocketTCP Socket2 = Selector.GetSocketReady(j);
sf::Packet SendPacket;
SendPacket << Message;
if(Socket2.Send(SendPacket) != sf::Socket::Done)
cout << "Error sending message to all clients" << endl;
}
}
else
{
// Error : we'd better remove the socket from the selector
Selector.Remove(Socket);
}
}
}
}
return 0;
}
Client code:
In the Player class I have this function:
void Player::ReceiveData()
{
int mess;
sf::Packet Packet;
if(Client.Receive(Packet) == sf::Socket::Done)
{
Client.Receive(Packet);
Packet >> mess;
cout << mess << endl;
}
}
main.cpp:
Player player;
player.Initialize();
player.LoadContent();
player.Connect();
..
..
//GAME LOOP
while(running==true)
{
sf::Event Event;
while(..) // EVENT LOOP
{
...
}
player.Update(Window);
player.ReceiveData();
player.Draw(Window);
}
When I run this client code, the program stops responding and freezes.
The problem is with that ReceiveData() function.
All sockets, even the ones created by SFML, are blocking by default. This means that when you try to receive while there is nothing to receive, the call will block, making your application appear frozen.
You can toggle the blocking status of a SFML socket with the sf::SocketTCP::SetBlocking function.
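For the client, that could look roughly like this (SFML 1.x API, as used in the question; NotReady simply means nothing has arrived yet):
// once, right after connecting: make the client socket non-blocking
Client.SetBlocking(false);
void Player::ReceiveData()
{
    int mess;
    sf::Packet Packet;
    if (Client.Receive(Packet) == sf::Socket::Done)   // a complete packet arrived
    {
        Packet >> mess;
        cout << mess << endl;
    }
    // on sf::Socket::NotReady there is no data yet: just return and let the game loop continue
}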
Sending to all clients fails because you use GetSocketReady to get the clients to send to. That function only returns a socket for clients that are ready (i.e. the previous call to Wait marked the socket as having input).
You need to refactor the server to keep track of the connected clients in another way. The common way is to reset and recreate the selector every time in the outer loop, and have a separate collection of the connected clients (e.g. a std::vector).
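A sketch of that refactor for the broadcast part, keeping the connected clients in a separate list (client_list is my own name):
#include <vector>
std::vector<sf::SocketTCP> client_list;   // every currently connected client
// when accepting a new connection:
//   Listener.Accept(Client, &Address);
//   Selector.Add(Client);
//   client_list.push_back(Client);
// when broadcasting a received message:
sf::Packet SendPacket;
SendPacket << Message << fromID;
for (std::size_t j = 0; j < client_list.size(); ++j)
{
    if (client_list[j].Send(SendPacket) != sf::Socket::Done)
        cout << "Error sending message to a client" << endl;
}
// when a client disconnects (Receive did not return Done):
//   Selector.Remove(Socket); and erase the matching entry from client_list as well.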