TCP/IP network communication in C++

I am trying to write a threaded function that sends system information via TCP/IP over the local network to another computer. I have been using sockets to achieve this, and it has worked out quite all right so far. But I am now at a point where it usually works, yet around 30% of the time I get error messages telling me that the socket cannot be opened. I use the ActiveSocket library for the sockets.
#include "tbb/tick_count.h"
#include "ActiveSocket.h"
using namespace std;
CActiveSocket socket;
extern int hardwareStatus;
int establishTCP() {
char time[11];
int communicationFailed = 0;
memset(&time, 0, 11);
socket.Initialize();
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
return communicationFailed;
}
int monitor() {
    cout << "Monitor: init continuous monitoring" << endl;
    int communicationFailed;
    tbb::tick_count monitorCounter = tbb::tick_count::now();
    while (!closeProgram) {
        tbb::tick_count currentTick = tbb::tick_count::now();
        tbb::tick_count::interval_t interval;
        interval = currentTick - monitorCounter;
        if (interval.seconds() > 2) {
            monitorCounter = tbb::tick_count::now();
            communicationFailed = 1;
            char buffer[256];
            sprintf(buffer, "%d;", hardwareStatus);
            establishTCP();
            char *charip = new char[monitoringIP.size() + 1];
            charip[monitoringIP.size()] = 0;
            memcpy(charip, monitoringIP.c_str(), monitoringIP.size());
            const uint8 *realip = (const uint8 *) charip;
            int monitorCount = 0;
            cout << "Monitor: " << buffer << endl;
            while (communicationFailed == 1 && monitorCount < 2) {
                monitorCount++;
                if (socket.Open(realip, 2417)) {
                    if (socket.Send((const uint8 *) buffer, strlen(buffer))) {
                        cout << "Monitor: Succeeded sending data" << endl;
                        communicationFailed = 0;
                        socket.Close();
                    } else {
                        socket.Close();
                        communicationFailed = 1;
                        cout << "Monitor: FAILED TO SEND DATA" << endl;
                    }
                } else {
                    socket.Close();
                    communicationFailed = 1;
                    cout << "Monitor: FAILED TO OPEN SOCKET FOR DATA" << endl;
                }
            }
            if (monitorCount == 2) cout << "Monitor: UNABLE TO SEND DATA" << endl;
        }
    }
    return communicationFailed;
}
I think I am doing something wrong in these functions and that the problem is not on the other side of the line where the data is received. Can anyone see any obvious mistakes in this code that could cause the failure? I keep getting my own cout message, "Monitor: FAILED TO OPEN SOCKET FOR DATA".
EDIT: With telnet everything works fine, 100% of the time.

You can use netstat to check that the server is listening on the port and that connections are being established. Snoop is another good tool in your armoury for finding out what is going wrong. Another possibility is to use telnet to see if the client can connect to that IP address and port. As for the code, I will take a look at it later to see if something has gone awry.

socket is a global variable. It might be re-used concurrently between two threads or sequentially inside one thread. In fact, the while (!closeProgram) loop indicates that you intend to use it sequentially.
Some documentation for CActiveSocket::Open reads: "Connection-based protocol sockets (CSocket::SocketTypeTcp) may successfully call Open() only once..."
Perhaps your program fails when you call .Open() twice on the same object.
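For illustration, here is a minimal sketch of that suggestion: give every attempt its own short-lived socket object instead of re-Open()ing the shared global. It assumes only the Initialize/Open/Send/Close calls already used in the question; sendOnce is a hypothetical helper name.

bool sendOnce(const uint8 *ip, uint16 port, const char *msg)
{
    CActiveSocket attempt;                 // a fresh TCP socket per attempt
    attempt.Initialize();
    if (!attempt.Open(ip, port))
        return false;                      // could not connect
    bool ok = attempt.Send((const uint8 *)msg, strlen(msg));
    attempt.Close();                       // always release the connection
    return ok;
}

The retry loop in monitor() could then simply call sendOnce() twice, with no shared socket state left behind between attempts.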

I eventually found the problem with my code. As the connection was unstable, working for about 70% of the time, it seemed to be a timeout issue. I removed the two timeout settings:
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
Now it works perfectly fine. Thanks for the troubleshooting tips, though!
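For what it's worth, a plausible explanation, assuming those setters take (seconds, microseconds) as the clsocket/ActiveSocket sources suggest: the removed calls allowed only 20 microseconds for connect/send, which a real network will regularly exceed, and that would account for the ~30% failures. A more generous value would likely also have worked, e.g.:

socket.SetConnectTimeout(1, 0); // 1 full second rather than 0 s + 20 us
socket.SetSendTimeout(1, 0);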

Related

Winsock2 only creates the same socket

I'm working on a server/client chat room for school.
Everything worked fine until I decided to take input from my clients. Since then, I don't know why, but every time I create a new socket it is always the same. I've tried creating a new solution that does nothing but create a socket, and even there the sockets are the same.
My test code as simple as that:
#pragma comment (lib, "Ws2_32.lib")
#include <WinSock2.h>
#include <iostream>

int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData))
    {
        return 1;
    }
    // Create Socket
    SOCKET sock = socket(AF_INET, SOCK_STREAM, 0);
    char a;
    std::cout << "Socket : " << sock << std::endl;
    std::cin >> a;
}
I end up with the same socket value all four times I create one.
Sometimes, weirdly, it works completely fine, but shortly after it goes back to that.
Edit:
To say more about the project: I'm not sure how to explain the code without posting 300 lines, which I assume isn't the best idea.
So here is my update for the server (yes, we're using polls because our teacher doesn't want us to use multithreading for now).
void Server::Update()
{
    do
    {
        WSAPoll(fds_, MAX_CLIENTS, -1);
        for (int i = 0; i < MAX_CLIENTS; ++i)
        {
            if (fds_[i].revents & POLLRDNORM)
            {
                if (i == 0)
                {
                    // Accept
                    AcceptClient(sock_);
                }
                else
                {
                    // Receive
                    ReceiveMsg(fds_[i].fd, receiveBuffer);
                    // Send the message to all clients except the sender
                    for (int j = 0; j < clients_.size(); ++j)
                    {
                        if (clients_[j].socket != fds_[i].fd)
                        {
                            SendMsg(clients_[j].socket, receiveBuffer);
                        }
                    }
                }
            }
            if (fds_[i].revents & POLLHUP)
            {
                closesocket(clients_[i].socket);
                std::cout << "Client with socket " << clients_[i].socket << " disconnected" << std::endl;
                clients_.erase(clients_.begin() + i);
            }
        }
    } while (true);
}
Here is the Accept code:
void Server::AcceptClient(SOCKET sock)
{
    // Client Socket
    SOCKET csock;
    SOCKADDR_IN csin;
    int crecsize = sizeof(csin);
    // Address Buffer
    char adressBuffer[65];
    csock = accept(sock, (SOCKADDR*)&csin, &crecsize);
    if (csock != INVALID_SOCKET)
    {
        std::cout
            << "Client with socket " << csock
            << " connected from " << inet_ntop(AF_INET, &csin.sin_addr, adressBuffer, sizeof(adressBuffer))
            << ":" << csin.sin_port << std::endl;
        clients_.push_back(Client(csock));
        fds_[clients_.size()].fd = csock;
        fds_[clients_.size()].events = POLLIN;
    }
    else
    {
        printError(WSAGetLastError(), __LINE__, __FILE__);
        return;
    }
}
But, by using WSAGetLastError(), I know that the error occurs client-side during the call to connect():
void NetworkClient::ConnectToServer(SOCKET sock, SOCKADDR_IN sin, int recsize)
{
    int sock_err = connect(sock, (SOCKADDR*)&sin, recsize);
    if (sock_err != INVALID_SOCKET)
    {
        std::cout << "Connection to the server succeeded" << std::endl;
    }
    else
    {
        printError(WSAGetLastError(), __LINE__, __FILE__);
        return;
    }
}
So I still end up with the same error, even though my socket is non-blocking.
Unlike on other platforms, where sockets are indices into a per-process file table, sockets on Windows are kernel objects. When a process exits, any objects it still has open are released automatically, allowing the kernel to reuse them. Seeing the same handle value again in the next run is therefore perfectly normal behavior.
UPDATE:
But, by using WSAGetLastError() I know that the error occur client side during the call of connect()
The error code you have shown is 10035 (WSAEWOULDBLOCK), which is normal behavior for a non-blocking connect(). It is NOT an error condition, so don't treat it like one. It simply means the connection operation is still in progress. WSAPoll() (or select(), etc.) will tell you at a later time when the operation has actually finished, and whether it was successful or not (in your case, the connection is successful, as is evident from your server log). This is explained in the connect() documentation:
For connection-oriented, nonblocking sockets, it is often not possible to complete the connection immediately. In such a case, this function returns the error WSAEWOULDBLOCK. However, the operation proceeds.
When the success or failure outcome becomes known, it may be reported in one of two ways, depending on how the client registers for notification.
If the client uses the select function, success is reported in the writefds set and failure is reported in the exceptfds set.
If the client uses the functions WSAAsyncSelect or WSAEventSelect, the notification is announced with FD_CONNECT and the error code associated with the FD_CONNECT indicates either success or a specific reason for failure.
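To make that concrete, here is a minimal sketch (an illustration, not the poster's code; ConnectWithTimeout is a hypothetical helper) of waiting for a non-blocking connect() with select(), exactly as the quoted documentation describes:

#include <winsock2.h>

// Returns true once a non-blocking connect() has actually completed.
bool ConnectWithTimeout(SOCKET sock, const SOCKADDR_IN &sin, long seconds)
{
    int rc = connect(sock, (const SOCKADDR *)&sin, sizeof(sin));
    if (rc == 0)
        return true;                          // connected immediately
    if (WSAGetLastError() != WSAEWOULDBLOCK)
        return false;                         // a real error, not "in progress"

    fd_set writefds, exceptfds;
    FD_ZERO(&writefds);  FD_SET(sock, &writefds);
    FD_ZERO(&exceptfds); FD_SET(sock, &exceptfds);
    timeval timeout = { seconds, 0 };
    // Per the documentation above: success lands in writefds, failure in exceptfds.
    int ready = select(0, nullptr, &writefds, &exceptfds, &timeout);
    return ready > 0 && FD_ISSET(sock, &writefds);
}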

ZMQ messages not being received

Please forgive me if I'm missing something simple; this is my first time doing anything with messaging, and I inherited this codebase from someone else.
I am trying to send a message from a Windows machine with an IP of 10.10.10.200 to an Ubuntu machine with an IP of 10.10.10.15.
I got the following result when running TCPView on the Windows machine, which makes me suspect that the problem lies on the Ubuntu side. If I'm reading it right, my app on the Windows machine is listening on port 5556, which is what it is supposed to do. In case I'm wrong, I'll include the Windows code too.
my_app.exe 5436 TCP MY_COMPUTER 5556 MY_COMPUTER 0 LISTENING
Windows app code:
void
NetworkManager::initializePublisher()
{
    globalContext = zmq_ctx_new();
    globalPublisher = zmq_socket(globalContext, ZMQ_PUB);
    string protocol = "tcp://*:";
    string portNumber = PUBLISHING_PORT; // 5556
    string address = protocol + portNumber;
    char *address_ptr = new char[address.size() + 1];
    strncpy_s(address_ptr, address.size() + 1, address.c_str(), address.size());
    int bind_res = zmq_bind(globalPublisher, address_ptr);
    if (bind_res != 0)
    {
        cerr << "FATAL: couldn't bind to port [" << portNumber << "] and protocol [" << protocol << "]" << endl;
    }
    cout << " Connection: " << address << endl;
}

void
NetworkManager::publishMessage(MESSAGE msgToSend)
{
    // Get the size of the message to be sent
    int sizeOfMessageToSend = MSG_MAX_SIZE; // sizeof(msgToSend);
    // Copy IDVS message to buffer
    char buffToSend[MSG_MAX_SIZE] = "";
    // Pack the message id
    size_t indexOfId = MSG_ID_SIZE + 1;
    size_t indexOfName = MSG_NAME_SIZE + 1;
    size_t indexOfdata = MSG_DATABUFFER_SIZE + 1;
    memcpy(buffToSend, msgToSend.get_msg_id(), indexOfId - 1);
    // Pack the message name
    memcpy(buffToSend + indexOfId, msgToSend.get_msg_name(), indexOfName - 1);
    // Pack the data buffer
    memcpy(buffToSend + indexOfId + indexOfName, msgToSend.get_msg_data(), indexOfdata - 1);
    // Send message
    int sizeOfSentMessage = zmq_send(globalPublisher, buffToSend, MSG_MAX_SIZE, ZMQ_DONTWAIT);
    getSubscriptionConnectionError();
    // If message size doesn't match, we have an issue; otherwise, we are good
    if (sizeOfSentMessage != sizeOfMessageToSend)
    {
        int errorCode = zmq_errno();
        cerr << "FATAL: couldn't send message." << endl;
        cerr << "ERROR: " << errorCode << endl;
    }
}
I can include more of this side's code if you think it's needed, but the error is popping up on the Ubuntu side, so I'm going to focus there.
The problem is that when I call zmq_recv it returns -1, and when I check zmq_errno I get EAGAIN (non-blocking mode was requested and no messages are available at the moment). I also checked with netstat and I didn't see anything on port 5556.
First is the function that connects to the publisher, then the function that gets data, followed by main.
Ubuntu side code:
void
*connectoToPublisher()
{
    void *context = zmq_ctx_new();
    void *subscriber = zmq_socket(context, ZMQ_SUB);
    string protocol = "tcp://";
    string ipAddress = PUB_IP; // 10.10.10.15
    string portNumber = PUB_PORT; // 5556
    string address = protocol + ipAddress + ":" + portNumber;
    cout << "Address: " << address << endl;
    char *address_ptr = new char[address.size() + 1];
    strcpy(address_ptr, address.c_str());
    // ------ Connect to Publisher ------
    bool isConnectionEstablished = false;
    int connectionStatus;
    while (isConnectionEstablished == false)
    {
        connectionStatus = zmq_connect(subscriber, address_ptr);
        switch (connectionStatus)
        {
        case 0: // we are good.
            cout << "Connection Established!" << endl;
            isConnectionEstablished = true;
            break;
        case -1:
            isConnectionEstablished = false;
            cout << "Connection Failed!" << endl;
            getSubscriptionConnectionError();
            cout << "Trying again in 5 seconds..." << endl;
            break;
        default:
            cout << "Hit default connecting to publisher!" << endl;
            break;
        }
        if (isConnectionEstablished == true)
        {
            break;
        }
        sleep(5); // Try again
    }
    // by the time we get here we should have connected to the pub
    return subscriber;
}

static void *
getData(void *subscriber)
{
    const char *filter = ""; // Get all messages
    int subFilterResult = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, filter, strlen(filter));
    // ------ Get in main loop ------
    while (1)
    {
        // get messages from publisher
        char bufferReceived[MSG_MAX_SIZE] = "";
        size_t expected_messageSize = sizeof(bufferReceived);
        int actual_messageSize = zmq_recv(subscriber, bufferReceived, MSG_MAX_SIZE, ZMQ_DONTWAIT);
        if (expected_messageSize == actual_messageSize)
        {
            MESSAGE msg = getMessage(bufferReceived); // Uses memcpy to copy id, name, and data struct data from buffer into struct of MESSAGE
            if (strcmp(msg.get_msg_id(), "IDXY_00000") == 0)
            {
                DATA = getData(msg); // Uses memcpy to copy data from buffer into struct of DATA
            }
        }
        else
        {
            // Something went wrong
            getReceivedError(); // This just calls zmq_errno and couts the error
        }
        usleep(1);
    }
}

int main(int argc, char *argv[])
{
    // Doing some stuff...
    void *subscriber_socket = connectoToPublisher();
    // Initialize Mux Lock
    pthread_mutex_init(&receiverMutex, NULL);
    // Initializing some variables...
    // Launch Thread to get updates from windows machine
    pthread_t publisherThread;
    pthread_create(&publisherThread, NULL, getData, subscriber_socket);
    // UI stuff
    zmq_close(subscriber_socket);
    return 0;
}
If you cannot provide a solution, I will accept identifying the problem as an answer. My main issue is that I don't have the knowledge or experience with messaging or networking to correctly identify the issue; typically, once I know what is wrong, I can fix it.
OK, this has nothing to do with the signalling / messaging framework.
Your Ubuntu code instructs the ZeroMQ Context()-instance engine to create a new SUB-socket instance, and the code then insists that this socket try to _connect() (set up a tcp:// transport-class connection towards the peering counterparty) to an "opposite" access point sitting at 10.10.10.15:5556 (an address on the Ubuntu localhost), while the intended PUB-side archetype access point actually lives not on this Ubuntu machine but on the other, Windows, host at 10.10.10.200:5556.
This seems to be the root cause of the problem, so change the address to match the physical layout and you may get the toys to work.
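In other words, a minimal sketch of the fix on the subscriber side, assuming 10.10.10.200 is indeed the machine where zmq_bind() runs:

#include <zmq.h>
#include <string>

void *connectToPublisher()
{
    void *context = zmq_ctx_new();
    void *subscriber = zmq_socket(context, ZMQ_SUB);
    // PUB_IP must name the Windows machine that called zmq_bind(),
    // i.e. 10.10.10.200 here, not the Ubuntu box the subscriber runs on.
    std::string address = "tcp://10.10.10.200:5556";
    zmq_connect(subscriber, address.c_str());
    // subscribe to everything, as the original getData() does
    zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);
    return subscriber;
}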

Trying to connect() multiple times in TCP

I'm writing a client/server application where the client and server should send data to each other via a TCP socket. The client should connect to the server and if the connection fails, it should wait a few seconds and then try again to connect to it (up to a certain number of tries).
This is the code I currently have:
const int i_TRIES = 5;
time_t t_timeout = 3000;
int i_port = 5678;
int i_socket;
string s_IP = "127.0.0.1";

for(int i = 0; i < i_TRIES; i++)
{
    if((i_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    {
        cout << "[Client]: Socket creation failed." << endl;
        exit(EXIT_FAILURE);
    }
    memset(&server_address, '0', sizeof(server_address));
    server_address.sin_family = AF_INET;
    server_address.sin_port = htons(i_port);
    if(inet_pton(AF_INET, s_IP.c_str(), &server_address.sin_addr) <= 0)
    {
        cout << "[Client]: Invalid IP address." << endl;
        exit(EXIT_FAILURE);
    }
    if(connect(i_socket, (struct sockaddr *)&server_address, sizeof(server_address)) < 0)
    {
        if(i < i_TRIES - 2)
        {
            cout << "[Client]: Connection to server failed. Trying again in " << t_timeout << " ms." << endl;
            close(i_socket);
            sleep(t_timeout);
        }
        else
        {
            cout << "[Client]: Could not connect to server, exiting." << endl;
            exit(EXIT_FAILURE);
        }
    }
    else
    {
        cout << "[Client]: Successfully connected to server." << endl;
        break;
    }
}
// do stuff with socket
The issue I'm having is that the first call to connect() works as expected: it fails if there's no server, and the loop repeats. However, the second time, connect() blocks forever (or at least for much longer than I want it to). Initially, my loop was just around the connect() if block (code below), and that caused the same problem. After that I included the whole socket setup (the code above) in the loop, but that also didn't help. I also tried closing the socket after a failed connection, but this didn't help either.
Initial for loop:
// other stuff from above here
for(int i = 0; i < i_TRIES; i++)
{
    if(connect(i_socket, (struct sockaddr *)&server_address, sizeof(server_address)) < 0)
    {
        if(i < i_TRIES - 2)
        {
            cout << "[Client]: Connection to server failed. Trying again in " << t_timeout << " ms." << endl;
            sleep(t_timeout);
        }
        else
        {
            cout << "[Client]: Could not connect to server, exiting." << endl;
            exit(EXIT_FAILURE);
        }
    }
    else
    {
        cout << "[Client]: Successfully connected to server." << endl;
        break;
    }
}
// do stuff with socket
Can I force connect() to return after a certain amount of time has passed? Or is there a way to get the connect() function to try multiple times on its own? Or is there something I need to do to the socket to reset everything before I can try again? I hope this isn't a dumb question; I couldn't find any information about how to connect multiple times.
Thanks in advance!
Can I force connect() to return after a certain amount of time has passed?
No. You must put the socket into non-blocking mode and then use select() or (e)poll() to provide timeout logic while you wait for the socket to connect. If the connection fails, or takes too long to connect, close the socket, create a new one, and try again.
Or is there a way to get the connect() function to try multiple times on it's own?
No. It can perform only 1 connection attempt per call.
Or is there something I need to do to the socket to reset everything before I can try again?
There is no guarantee that you can even call connect() multiple times on the same socket. On some platforms, you must destroy the socket and create a new socket before you call connect() again. You should get in the habit of doing that for all platforms.
Put the socket into non-blocking mode and use select() to implement the timeout. Select for writability on the socket. Note that you can decrease the platform's connect timeout by this means, but not increase it.
The sleep() is pointless, just literally a waste of time.
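Putting those answers together, here is a minimal sketch (a hypothetical helper, not tested against the poster's server) of one timed connection attempt on POSIX; call it from a retry loop, and note that it creates and destroys a fresh socket every time:

#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>

// Returns a connected fd, or -1 on failure or timeout.
int timed_connect(const sockaddr_in &addr, int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK); // non-blocking mode

    int rc = connect(fd, (const sockaddr *)&addr, sizeof(addr));
    if (rc < 0 && errno != EINPROGRESS) {
        close(fd);                       // immediate failure
        return -1;
    }
    if (rc < 0) {                        // connection attempt in progress
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        timeval tv = { timeout_sec, 0 }; // our timeout, not the platform's
        if (select(fd + 1, nullptr, &wfds, nullptr, &tv) <= 0) {
            close(fd);                   // timed out, or select() failed
            return -1;
        }
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
        if (err != 0) {                  // connect finished, but with an error
            close(fd);
            return -1;
        }
    }
    return fd;                           // connected; still non-blocking
}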

Forking server for handling many clients in C++

I'm kind of new to network programming and I'm trying to make a socket server that can handle multiple clients. The server will be the connection between players and a game engine for a text-based adventure, written in C++.
I got the code working for single clients, and for sending data between client and server. The next step in the implementation is to make it handle multiple clients. From what I understand, fork() is a way to do this. I've got this code so far, but I can't for the life of me get it to work.
while (1) {
    cout << "Server waiting." << endl;
    n = sizeof(struct sockaddr_in);
    if ((clientSocket = accept(servSocket, (struct sockaddr*) (&client), (socklen_t*) (&n))) < 0) {
        cerr << "Error: " << errno << ": " << strerror(errno) << endl;
    }
    if (fork() == 0) {
        cout << "Child process created. Handling connection with " << inet_ntop(AF_INET, &client.sin_addr, buff, sizeof(buff)) << endl;
        close(servSocket);
    }
    string sendmsg;
    string recvmsg;
    int bytesRecieved = 0;
    char package[1024];
    string playerMessage;
    while (1) {
        bytesRecieved = recv(clientSocket, package, 1024, 0);
        for (int offset = 0; offset < bytesRecieved/sizeof(char); ++offset) {
            playerMessage += package[offset];
        }
        cout << playerMessage;
        cin >> sendmsg;
        sendmsg += "\n";
        send(clientSocket, sendmsg.c_str(), sendmsg.size(), 0);
    }
}
close(clientSocket);
close(servSocket);
return 0;
I understand that bind() and everything before it should happen before the main loop with fork() in it, so I didn't bother to include that.
Thanks in advance!
Creating a process per connection is the wrong approach in most cases. What if you have 20,000 players? Context switching between 20,000 processes adds too much overhead and slows the server down.
Consider using async programming instead; boost::asio is one of the best choices for that.
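That said, if you do stick with fork() for this assignment, the accept loop usually takes the shape below (handle_connection is a hypothetical stand-in for your recv/send logic). The two things the posted code is missing: only the child should service the connection and must exit afterwards, and the parent must close its copy of the client fd and go straight back to accept():

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <csignal>

void handle_connection(int clientSocket); // hypothetical per-client logic

void serve_forever(int servSocket)
{
    signal(SIGCHLD, SIG_IGN);              // auto-reap children, avoid zombies
    while (true) {
        sockaddr_in client;
        socklen_t n = sizeof(client);
        int clientSocket = accept(servSocket, (sockaddr *)&client, &n);
        if (clientSocket < 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                    // child: owns this connection
            close(servSocket);             // the child never accepts
            handle_connection(clientSocket);
            close(clientSocket);
            _exit(0);                      // never fall back into the accept loop
        }
        close(clientSocket);               // parent: child holds the only copy now
    }
}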

"Connection was broken" error with UDT (UDP-based data transfer protocol)

I am programming a real-time game in which I need reliable UDP, so I've chosen to work with UDT (UDP-based Data Transfer protocol - http://sourceforge.net/projects/udt/).
The clients (in browsers) send real-time messages to my server via CGI scripts. The problem is that some messages are being lost, and I don't know why: the server says that it sent all the messages successfully to the corresponding clients, but sometimes the client doesn't receive the message.
In my debug file, I've found that when a message is not received by the client, its script says:
error in recv();
recv: Connection was broken.
I would like some help on how the server can know whether the client got its message; should I send a NACK or something from the client side? I thought UDT would do that for me. Can someone clarify this situation?
The relevant sections of the communication parts of my code are below, with some comments:
server's relevant code:
//...
void send_msg_in(player cur, char* xml){
    /*this function stores the current message, xml, in a queue if xml!=NULL, and sends the 1st message of the queue to the client*/
    /*this function is called when the player connects with the entering xml=NULL to get the 1st message of the queue,
      or with xml!=NULL when a new message arrives: in this case the message is stored in the queue, and then the message will be sent in the appropriate time, i.e. the messages are ordered.*/
    char* msg_ptr=NULL;
    if (xml!=NULL){ //add the message to a queue (FIFO), the cur.xml_msgs
        msg_ptr=(char*) calloc(strlen(xml)+1, sizeof(char));
        strcpy(msg_ptr, xml);
        (*(cur.xml_msgs)).push(msg_ptr);
    } //get the 1st message of the queue
    if (!(*(cur.xml_msgs)).empty()){
        xml=(*(cur.xml_msgs)).front();
    }
    if (cur.get_udt_socket_in()!=NULL){
        UDTSOCKET cur_udt = *(cur.get_udt_socket_in());
        // cout << "send_msg_in(), cur_udt: " << cur_udt << endl;
        //send the "xml", i.e. the 1st message of the queue...
        if (UDT::ERROR == UDT::send(cur_udt, xml, strlen(xml)+1, 0)){
            UDT::close(cur_udt);
            cur.set_udt_socket_in(NULL);
        }
        else{ //if no error this else is reached
            cout << "TO client:\n" << xml << "\n"; /*if there is no error,
                i.e. on success, the server prints the message that was sent.*/
            //  / \
            // /_!_\
            /*the problem is that
              the messages that are lost don't appear on the client side,
              but they appear here on the server! */
            if (((string) xml).find("<ack.>")==string::npos){
                UDT::close(cur_udt);
                cur.set_udt_socket_in(NULL); //close the socket
            }
            (*(cur.xml_msgs)).pop();
        }
    }
}
//...
client's relevant code:
//...
#define MSGBUFSIZE 1024
char msgbuf[MSGBUFSIZE];
UDTSOCKET client;
ofstream myfile;
//...
main(int argc, char *argv[]){
    //...
    // connect to the server, implicit bind
    if (UDT::ERROR == UDT::connect(client, (sockaddr*)&serv_addr, sizeof(serv_addr))){
        cout << "error in connect();" << endl;
        return 0;
    }
    myfile.open("./log.txt", ios::app);
    send(xml);
    char* cur_xml;
    do{
        cur_xml = receive(); //wait for an ACK or a new message...
        myfile << cur_xml << endl << endl; //  / \
        /* /_!_\ the lost messages don't appear on the website
           nor in this log file.*/
    } while (((string) cur_xml).find("<ack.>")!=string::npos);
    cout << cur_xml << endl;
    myfile.close();
    UDT::close(client);
    return 0;
}

char* receive(){
    if (UDT::ERROR == UDT::recv(client, msgbuf, MSGBUFSIZE, 0)){
        //  / \
        /* /_!_\ when a message is not well received
           this code is usually reached, and an error is printed.*/
        cout << "error in recv();" << endl;
        myfile << "error in recv();" << endl;
        myfile << "recv: " << UDT::getlasterror().getErrorMessage() << endl << endl;
        return 0;
    }
    return msgbuf;
}

void* send(string xml){
    if (UDT::ERROR == UDT::send(client, xml.c_str(), strlen(xml.c_str())+1, 0)){
        cout << "error in send();" << endl;
        myfile << "error in send();" << endl;
        myfile << "send: " << UDT::getlasterror().getErrorMessage() << endl << endl;
        return 0;
    }
}
Thank you for any help!
PS. I tried to increase the linger time on close() after finding the link http://udt.sourceforge.net/udt4/doc/opt.htm, adding the following to the server's code:
struct linger l;
l.l_onoff = 1;
l.l_linger = ...; //a huge value in seconds...
UDT::setsockopt(*udt_socket_ptr, 0, UDT_LINGER, &l, sizeof(l));
but the problem is still the same...
PPS. The other parts of the communication on the server side are below (note: it seems to me that they are not so relevant):
main(int argc, char *argv[]){
    char msgbuf[MSGBUFSIZE];
    UDTSOCKET serv = UDT::socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in my_addr;
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons(PORT);
    my_addr.sin_addr.s_addr = INADDR_ANY;
    memset(&(my_addr.sin_zero), '\0', sizeof(my_addr.sin_zero));
    if (UDT::ERROR == UDT::bind(serv, (sockaddr*)&my_addr, sizeof(my_addr))){
        cout << "error in bind();";
        return 0;
    }
    UDT::listen(serv, 1);
    int namelen;
    sockaddr_in their_addr;
    while (true){
        UDTSOCKET recver = UDT::accept(serv, (sockaddr*)&their_addr, &namelen);
        if (UDT::ERROR == UDT::recv(recver, msgbuf, MSGBUFSIZE, 0)){
            //this recv() function is called only once for each accept(), because the clients call CGI scripts via a browser; they need to call a new CGI script with a new UDT socket for each request (this is in agreement with the clients' code presented before).
            cout << "error in recv();" << endl;
        }
        char* player_xml = (char*) &msgbuf;
        cur_result = process_request((char*) &msgbuf, &recver, verbose); //ACK
    }
}

struct result process_request(char* xml, UDTSOCKET* udt_socket_ptr, bool verbose){
    //parse the XML...
    //...
    player* cur_ptr = get_player(me); //searches in a vector of player, according to the string "me" of the XML parsing.
    UDTSOCKET* udt_ptr = (UDTSOCKET*) calloc(1, sizeof(UDTSOCKET));
    memcpy(udt_ptr, udt_socket_ptr, sizeof(UDTSOCKET));
    if (cur_ptr==NULL){
        //register the player:
        player* this_player = (player*) calloc(1, sizeof(player));
        //...
    }
    else if (strcmp(request_type.c_str(), "info_waitformsg")==0){
        if (udt_ptr!=NULL){
            cur_ptr->set_udt_socket_in(udt_ptr);
            if (!(*(cur_ptr->xml_msgs)).empty()){
                send_msg_in(*cur_ptr, NULL, true);
            }
        }
    }
    else{ //messages that get instant response from the server.
        if (udt_ptr!=NULL){
            cur_ptr->set_udt_socket_out(udt_ptr);
        }
        if (strcmp(request_type.c_str(), "info_chat")==0){
            info_chat cur_info;
            to_object(&cur_info, me, request_type, msg_ptr); //convert the XML string values to a struct
            process_chat_msg(cur_info, xml);
        }
        /* else if (...){ //other types of messages...
        }*/
    }
}

void process_chat_msg(info_chat cur_info, char* xml_in){
    player* player_ptr=get_player(cur_info.me);
    if (player_ptr){
        int i=search_in_matches(matches, cur_info.match_ID);
        if (i>=0){
            match* cur_match=matches[i];
            vector<player*> players_in = cur_match->followers;
            int n=players_in.size();
            for (int i=0; i<n; i++){
                if (players_in[i]!=msg_owner){
                    send_msg_in(*(players_in[i]), xml, flag);
                }
            }
        }
    }
}
Looking at the UDT source code at http://sourceforge.net/p/udt/git/ci/master/tree/udt4/src/core.cpp, the error message "Connection was broken" is produced when either of the Boolean flags m_bBroken or m_bClosing is true and there is no data in the receive buffer.
Those flags are set in just a few cases:
In sections of code marked "should not happen; attack or bug" (unlikely)
In deliberate close or shutdown actions (don't see this happening in your code)
In expiration of a timer that checks for peer activity (the likely culprit)
In that source file at line 2593 it says:
// Connection is broken.
// UDT does not signal any information about this instead of to stop quietly.
// Application will detect this when it calls any UDT methods next time.
//
m_bClosing = true;
m_bBroken = true;
// ...[code omitted]...
// app can call any UDT API to learn the connection_broken error
Looking at the send() call, I don't see anywhere that it waits for an ACK or NAK from the peer before returning, so I don't think a successful return from send() on the server side is indicative of successful receipt of the message by the client.
You didn't show the code on the server side that binds to the socket and listens for responses from the client; if the problem is there then the server might be happily sending messages and never listening to the client that is trying to respond.
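Given that, if the game needs per-message confirmation, it has to be built at the application level. A minimal sketch, reusing the UDT::send/UDT::recv calls and variables from the question; the receipt token and the requeue step are illustrative, not part of the poster's protocol:

const char RECEIPT[] = "<got-it/>";    // hypothetical application-level receipt

// client side: after a successful recv, echo a receipt back
int n = UDT::recv(client, msgbuf, MSGBUFSIZE, 0);
if (n != UDT::ERROR)
    UDT::send(client, RECEIPT, sizeof(RECEIPT), 0);

// server side: only pop the queued message once the receipt arrives
if (UDT::ERROR != UDT::send(cur_udt, xml, strlen(xml) + 1, 0)) {
    char ack[sizeof(RECEIPT)];
    if (UDT::ERROR == UDT::recv(cur_udt, ack, sizeof(ack), 0)) {
        // no receipt: keep the message queued and retry on the next connection
    } else {
        (*(cur.xml_msgs)).pop();       // confirmed delivered; safe to drop
    }
}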
UDP is not a guaranteed-transmission protocol. A host sends a message, and if the recipient does not receive it, or does not receive it properly, no error is raised. It is therefore commonly used in applications that need speed more than perfect delivery, such as games. TCP does guarantee delivery, because it requires a connection to be set up first, and each message is acknowledged by the peer.
I would encourage you to think about whether you actually need guaranteed receipt of that data, and, if you do, consider using TCP.