Handling SSL client not reading all data - C++

I am trying to make sure that my SSL server does not break down when a client does not collect all the data, i.e. when the data is too long. (Fixed, apart from one minor bug.)
Basically what I'm trying to do is write in a non-blocking way. For that I found two different approaches:
First approach:
Using this code
int flags = fcntl(ret.fdsock, F_GETFL, 0);
fcntl(ret.fdsock, F_SETFL, flags | O_NONBLOCK);
and creating the SSL connection with it.
Second approach:
Doing this directly after creating the SSL object using SSL_new(ctx):
BIO *sock = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
BIO_set_nbio(sock, 1);
SSL_set_bio(client, sock, sock);
Both of these have their downsides, and neither helps to solve the problem.
The first approach seems to read in a non-blocking way just fine, but when I write more data than the client reads, my server crashes.
The second approach does not seem to do anything, so my guess is that I did something wrong or did not understand what a BIO actually does.
For more information, here is how the server writes to the client:
int SSLConnection::send(char* msg, const int size){
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while(rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0){
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Output:
expected bytes to send: 78888890
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(means: hit <return> to close window)
EDIT: After people said that I need to catch errors appropriately, here is my new code:
Setup:
connection = SSL_new(ctx);
if (connection){
    BIO *sbio = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
    if (sbio) {
        BIO_set_nbio(sbio, false);
        SSL_set_bio(connection, sbio, sbio);
        SSL_set_accept_state(connection);
    } else {
        std::cout << "Bio is null" << std::endl;
    }
} else {
    std::cout << "client is null" << std::endl;
}
Sending:
int SSLConnection::send(char* msg, const int size){
    if(connection == NULL) {
        std::cout << "ERR: Connection is NULL" << std::endl;
        return -1;
    }
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while(rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0){
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
            break;
        } else if (tmp_bytes_sent == 0){
            std::cout << "tmp_bytes are 0" << std::endl;
            break;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Using a client that fetches 60 bytes, here is the output:
Output writing 1,000,000 Bytes:
expected bytes to send: 1000000
any error : 0
tmp_bytes_sent: 16384
any error : 0
tmp_bytes_sent: 16384
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(translates to: hit <RETURN> to close window)
Output writing 1,000 bytes:
expected bytes to send: 1000
any error : 0
tmp_bytes_sent: 1000
connection closed <- expected output

First, a warning: non-blocking I/O over SSL is a rather baroque API, and it's difficult to use correctly. In particular, the SSL layer sometimes needs to read internal data before it can write user data (or vice versa), and the caller's code is expected to be able to handle that based on the error-codes feedback it gets from the SSL calls it makes. It can be made to work correctly, but it's not easy or obvious -- you are de facto required to implement a state machine in your code that echoes the state machine inside the SSL library.
Below is a simplified version of the logic that is required. (It's extracted from the Write() method in this file, which is part of this library, in case you want to see a complete, working implementation.)
enum {
   SSL_STATE_READ_WANTS_READABLE_SOCKET   = 0x01,
   SSL_STATE_READ_WANTS_WRITEABLE_SOCKET  = 0x02,
   SSL_STATE_WRITE_WANTS_READABLE_SOCKET  = 0x04,
   SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET = 0x08
};

// a bit-chord of SSL_STATE_* bits to keep track of what
// the SSL layer needs us to do next before it can make more progress
uint32_t _sslState = 0;

// Note that this method returns the number of bytes sent, or -1
// if there was a fatal error.  So if this method returns 0 that just
// means that this function was not able to send any bytes at this time.
int32_t SSLSocketDataIO :: Write(const void *buffer, uint32 size)
{
   int32_t bytes = SSL_write(_ssl, buffer, size);
   if (bytes > 0)
   {
      // SSL was able to send some bytes, so clear the relevant SSL-state-flags
      _sslState &= ~(SSL_STATE_WRITE_WANTS_READABLE_SOCKET | SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET);
   }
   else if (bytes == 0)
   {
      return -1;  // the SSL connection was closed, so return failure
   }
   else
   {
      // The SSL layer's internal needs aren't being met, so we now have to
      // ask it what its problem is, then give it what it wants.  :P
      int err = SSL_get_error(_ssl, bytes);
      if (err == SSL_ERROR_WANT_READ)
      {
         // SSL can't write anything more until the socket becomes readable,
         // so we need to go back to our event loop, wait until the
         // socket select()'s as readable, and then call SSL_write() again.
         _sslState |=  SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
         _sslState &= ~SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
         bytes = 0;  // Tell the caller we weren't able to send anything yet
      }
      else if (err == SSL_ERROR_WANT_WRITE)
      {
         // SSL can't write anything more until the socket becomes writable,
         // so we need to go back to our event loop, wait until the
         // socket select()'s as writeable, and then call SSL_write() again.
         _sslState &= ~SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
         _sslState |=  SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
         bytes = 0;  // Tell the caller we weren't able to send anything yet
      }
      else
      {
         // SSL had some other problem I don't know how to deal with,
         // so just print some debug output and then return failure.
         fprintf(stderr, "SSL_write() ERROR!");
         ERR_print_errors_fp(stderr);
      }
   }
   return bytes;  // Returns the number of bytes we actually sent
}
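To show how the flags get consumed, here is a hedged sketch of the event-loop side (this part is not from the quoted library; WaitAndRetryWrite, fd, and pendingData are hypothetical members invented for illustration):

#include <sys/select.h>
#include <string>

// Illustrative only: wait until the socket is in the state the SSL layer
// asked for, then retry Write(). Assumes this lives in the same class as
// Write() above; fd and pendingData (the unsent bytes) are hypothetical.
bool SSLSocketDataIO :: WaitAndRetryWrite()
{
   fd_set readSet, writeSet;
   FD_ZERO(&readSet);
   FD_ZERO(&writeSet);

   // If SSL said the write needs a readable socket (e.g. mid-renegotiation),
   // wait for readability; otherwise wait for writability.
   if (_sslState & SSL_STATE_WRITE_WANTS_READABLE_SOCKET) FD_SET(fd, &readSet);
                                                     else FD_SET(fd, &writeSet);

   if (select(fd + 1, &readSet, &writeSet, NULL, NULL) <= 0) return false;

   const int32_t sent = Write(pendingData.data(), pendingData.size());
   if (sent < 0) return false;     // fatal error: caller should close
   pendingData.erase(0, sent);     // keep only the still-unsent bytes
   return true;                    // made progress (possibly zero bytes)
}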

I think your problem is
rest_size -= bytes_sent;
You should do rest_size -= tmp_bytes_sent;
Also:
if (tmp_bytes_sent < 0){
    std::cout << tmp_bytes_sent << std::endl;
    // it's an error condition
    return bytes_sent;
}
I don't know whether this will fix the issue, but the code you pasted has the above-mentioned issues.

When I write more data than the client reads, my server crashes.
No it doesn't, unless you've violently miscoded something else that you haven't posted here. It either loops forever or it gets an error: probably ECONNRESET, which means the client has behaved as you described, and you've detected it, so you should close the connection and forget about him. Instead of which, you are just looping forever, trying to send the data over a broken connection, which can never succeed.
And when you get an error, there's not much use in just printing a -1. You should print the error, with perror() or errno or strerror().
Speaking of looping forever, don't loop like this. SSL_write() can return 0, which you aren't handling at all: this will cause an infinite loop. See also David Schwartz's comments below.
NB you should definitely use the second approach. OpenSSL needs to know that the socket is in non-blocking mode.
Both of which have their downsides
Such as?
And as noted in the other answer,
rest_size -= bytes_sent;
should be
rest_size -= tmp_bytes_sent;
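Putting those points together, a minimal sketch of the question's send() with those fixes applied (my adaptation, untested; 0 and negative returns both terminate the loop, and the underlying error is printed instead of just -1):

// Needs <cerrno> and <cstring> for errno/strerror.
int SSLConnection::send(char* msg, const int size)
{
    int rest_size = size;
    int bytes_sent = 0;
    while (rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, msg + bytes_sent, rest_size);
        if (tmp_bytes_sent <= 0) {
            // 0 means the peer closed the connection; < 0 means an error.
            int err = SSL_get_error(connection, tmp_bytes_sent);
            std::cerr << "SSL_write failed, SSL error " << err
                      << ", errno: " << strerror(errno) << std::endl;
            return bytes_sent > 0 ? bytes_sent : -1;
        }
        bytes_sent += tmp_bytes_sent;
        rest_size  -= tmp_bytes_sent;
    }
    return bytes_sent;
}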

Related

C/C++ recv hangs although local server sends data

I am having a hard time figuring out a bug in my TCP client-server app. The problem I am facing: in my recv function's do-while loop, if the condition is bytes > 0, the function hangs forever. Replacing that with bytes == NMAX partially fixes the issue, UNLESS NMAX is equal to 1. A few side notes: doing a single send-recv works fine, but trying to do a send-recv and then recv-send hangs forever. NMAX is a constant set to 4096 by default. The server is run first, then the client.
This is my send function:
ssize_t sendData(const std::string data, int fd)
{
    ssize_t total = data.length(), bytes, sent = 0;
    do
    {
        ssize_t chunk = total > NMAX ? NMAX : total;
        bytes = send(fd, data.c_str() + sent, chunk, 0);
        if (bytes == -1)
        {
            throw std::system_error(errno, std::generic_category(), "Error sending data");
        }
        total -= bytes;
        sent += bytes;
    } while (total > 0);
    return sent;
}
This is my recv function:
std::string recvData(int fd)
{
    ssize_t bytes;
    std::string buffer;
    do
    {
        std::vector<char> data(NMAX, 0);
        bytes = recv(fd, &data[0], NMAX, 0);
        if (bytes == -1)
        {
            throw std::system_error(errno, std::generic_category(), "Error receiving data");
        }
        buffer.append(data.cbegin(), data.cend());
    } while (bytes > 0); // Replacing with bytes == NMAX partially fixes the issue, why?
    return buffer;
}
This is the client's main function:
std::cout << "Sent " << sendData(data) << " bytes\n";
std::cout << "Received: " << recvData() << "\n";
And this is the server's main function:
std::cout << "Received: " << recvData(client) << "\n";
std::cout << "Sent " << sendData("Hello from the server side!", client) << " bytes\n";
The problem with your program is that the receiving side does not know how many bytes to receive in total. Therefore it will just endlessly try to read more bytes.
The reason why it "hangs" is that you perform a blocking system call (recv) which will only unblock once at least 1 more byte has been received. However, since the peer does not send more data, this will never happen.
To fix the issue you need a proper wire format for your data which indicates how big the transmitted data is, or where it starts and ends. A common way to do this is to prefix the data with its length in binary form (e.g. a 32-bit unsigned int in big-endian format). Another way is to have indicators inside the data that mark its end (e.g. the \r\n\r\n line breaks in HTTP).
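To illustrate the length-prefix idea, here is a minimal sketch (sendFramed/recvFramed are hypothetical names; MSG_WAITALL keeps the example short, and a production version would loop on short reads/writes as discussed above):

#include <arpa/inet.h>   // htonl / ntohl
#include <string>
#include <sys/socket.h>
#include <system_error>
#include <vector>

// Send a 32-bit big-endian length prefix, then the payload.
void sendFramed(int fd, const std::string& data)
{
    uint32_t len = htonl(static_cast<uint32_t>(data.size()));
    if (send(fd, &len, sizeof(len), 0) != sizeof(len))
        throw std::system_error(errno, std::generic_category(), "send length");
    if (send(fd, data.data(), data.size(), 0) != (ssize_t)data.size())
        throw std::system_error(errno, std::generic_category(), "send payload");
}

// Read the 4-byte prefix first, then exactly that many payload bytes.
std::string recvFramed(int fd)
{
    uint32_t len = 0;
    if (recv(fd, &len, sizeof(len), MSG_WAITALL) != sizeof(len))
        throw std::system_error(errno, std::generic_category(), "recv length");
    std::vector<char> buf(ntohl(len));
    if (recv(fd, buf.data(), buf.size(), MSG_WAITALL) != (ssize_t)buf.size())
        throw std::system_error(errno, std::generic_category(), "recv payload");
    return std::string(buf.begin(), buf.end());
}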
Btw: your send function is not ideal for the case where data.length() == 0. You would then perform a send system call with 0 bytes, which is rather unnecessary.

Why am I always receiving data on the server side of a socket in C++?

After opening a connection between client and server, I need to handle any write command sent to the server using read() (i.e. when the client calls write(), the server should read() right away).
It sounds like a trivial problem. First, I sent 58 bytes from the client. But I am always receiving a huge amount of data on the server side. Here is the relevant part of the code:
int sockfd, newsockfd; //, n0,n1,n2;
socklen_t clilen;
struct sockaddr_in serv_addr, cli_addr;
int reuse = 1;

sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0)
    cerr << "ERROR opening socket" << endl;
if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(int)) == -1)
    cerr << "ERROR on reusing port" << endl;
bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(iport);
serv_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
    cerr << "ERROR on binding" << endl;
cout << "Listening on port: " << iport << endl;
listen(sockfd, 1);
clilen = sizeof(cli_addr);
newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
if (newsockfd < 0)
    cerr << "ERROR on accept" << endl;
while (1) {
    size_t msgSize = 0;
    int n = read(newsockfd, &msgSize, sizeof(size_t));
    cout << "Breakpoint " << msgSize << endl;
    // Reading bytes size from socket until 10MB
    if (n > 0 && msgSize < 10485760) {
        byte bytes[msgSize];
        if (read(newsockfd, bytes, msgSize) > 0) {
            char ip[16];
            memset(bytes + msgSize, '\0', MSGMAXSIZE - msgSize - 1);
            if (read(newsockfd, ip, 15) > 0) {
                string cIP = (string) ip;
                //cout << "Sender Ip: " << cIP << endl;
                process p = currentView.getProcess(cIP);
                message m(bytes, p);
                cout << "*************************" << endl
                     << "Message received:" << endl
                     << "*****************" << endl;
                m.print();
            }
        }
    }
}
This is the result I got:
Listening on port: 4444
Connected to: 127.0.0.1:6666
Breakpoint 58
*************************
Message received:
*****************
Message text: I am trying to send a message
Message size: 58
Message sender: 127.0.0.1
Message stability: 0
**************************************************
Breakpoint 825634866
Breakpoint 808600630
Breakpoint 842478647
Breakpoint 959854903
Breakpoint 926303542
Breakpoint 876032050
Breakpoint 808601142
Breakpoint 892744503
Breakpoint 875971894
Breakpoint 825634866
Breakpoint 1144401970
Breakpoint 859256118
Breakpoint 825635639
Breakpoint 892745526
Breakpoint 775369265
Breakpoint 774909488
Breakpoint 14897
Segmentation fault
And here is the relevant part of the code on the client side:
while (1)
{
    if (!bufferMsg(m)) break;
}

bool bufferMsg(message m) // Sends a message (m) to a process (p)
{
    mtx.lock();
    if (fifoBuffer.size() < 5)
    {
        fifoBuffer.push_back(m);
        size_t sizeMsg = m.getHeader().sizeMsg;
        byte *bytes = m.getBytes();
        if (!write(sendsockfd, &sizeMsg, sizeof(size_t))
            || !write(sendsockfd, bytes, sizeMsg)
            || !write(sendsockfd, (char*)m.getHeader().sender.getIp().c_str(), strlen(m.getHeader().sender.getIp().c_str())))
            cerr << "ERROR writing to socket" << endl;
        mtx.unlock();
        return true;
    }
    else {
        mtx.unlock();
        return false;
    }
}
Here is the header of the message:
typedef struct HeaderType {
    size_t sizeMsg;
    process sender;    // The header.sender process
    //view currentView; // the current view
    //iClock C;         // reserved for later use
    bool stability;    // reserved for later use
} HeaderT;
PS: message and process are classes that I created; they are not relevant here.
Please feel free to ask if you need more clarification or information.
I have the impression you think that the client-side write should block and wait until the data is consumed by the server. The OS is free to deliver as many bytes as it likes on a TCP stream.
You have a lot of checks like if(read(newsockfd, bytes, msgSize) > 0) in your code, where you seem to silently assume that the read either fails completely or delivers exactly the amount of data you're waiting for. That need not be the case.
This:
if (n > 0 && msgSize < 10485760) {
    byte bytes[msgSize];
is dangerous, since the byte array (which I assume is a typedef) gets allocated on the stack, and I assume no OS on the planet accepts a 10MB local variable. (But I might be wrong, or modern compilers may silently allocate it on the heap.) It's the top candidate for your segfault the first time a bogus msgSize below 10MB gets through. Better do something like:
std::vector<byte> bytes(msgSize);
(std::auto_ptr is not suitable here: used with new[] it would call delete instead of delete[], which is undefined behavior.)
For your read-in of msgSize, better do something like:
int n = 0;
int nn = 0;
while (n < (int)sizeof(size_t)
       && (nn = read(newsockfd, ((char *)&msgSize) + n, sizeof(size_t) - n)) > 0) {
    n += nn;
}
On the client side you do something like:
write(sendsockfd, (char*)m.getHeader().sender.getIp().c_str(), strlen(m.getHeader().sender.getIp().c_str()))
to transfer something like an IP (I assume a string like 88.1.2.250), but on the server side you read it like:
read(newsockfd, ip, 15)
and the two don't need to match. That would lead to a frame shift in your reads, and the next msgSize would be bogus. May I assume that the first msgSize you ever read is correct? Under the assumption that the first read actually delivers sizeof(size_t):
size_t msgSize = 0;
int n = 0;
do {
    int t = read(newsockfd, ((char*)&msgSize) + n, sizeof(size_t) - n);
    if (t < 0)
        continue;  // no data available (in non-blocking mode, or on timeout)
    if (t == 0)
        break;     // connection closed
    n += t;        // increase counter n by the amount actually read
} while (n < sizeof(size_t));
cout << "Breakpoint " << msgSize << endl;

// Reading msgSize bytes from socket until 10MB
if (n > 0 && msgSize < 10485760) {
    byte bytes[msgSize];
    n = 0;
    int t;
    while ((t = read(newsockfd, bytes + n, msgSize - n)) > 0  // if something was read
           && (n += t) < msgSize                              // and the total is below msgSize, we continue reading
           || t < 0)                                          // or when no data is available, we give it another attempt
    {
    }
    if (t > 0) {
        cout << "successful: " << n << endl;
    } else {
        cout << "only " << n << " of " << msgSize << " read" << endl;
    }
}
Tricky parts explained:
((char*)&msgSize) + n
This casts the pointer-to-size_t to a pointer-to-char, and + n then advances the pointer by n times the size of the type it points to (for char, n bytes).
(t = read(newsockfd, bytes + n, msgSize - n)) > 0
An assignment returns the assigned value. It has to be inside parentheses, as without them the boolean result of the > comparison would be assigned to t.
Side note:
You should not send the raw binary representation of an integer value to another computer. The sender might use MSB byte order while the recipient could be using LSB. You should use the methods provided to convert from host byte order to network byte order. They are called htonl and ntohl (h: host, to: to, n: network, l: long [4 bytes]).
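For example, a minimal sketch of both directions (the helper names are illustrative, and the short-read/short-write loops discussed above are elided to comments):

#include <arpa/inet.h>  // htonl / ntohl
#include <cstdint>
#include <unistd.h>

// Sender side: convert to network byte order before writing.
void writeMsgSize(int fd, size_t sizeMsg)
{
    uint32_t wire = htonl(static_cast<uint32_t>(sizeMsg));
    write(fd, &wire, sizeof(wire));   // a real version loops on short writes
}

// Receiver side: convert back to host byte order after reading.
size_t readMsgSize(int fd)
{
    uint32_t wire = 0;
    read(fd, &wire, sizeof(wire));    // a real version loops, as shown above
    return ntohl(wire);
}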

"Connection was broken" error with UDT (UDP-based data transfer protocol)

I am programming a real-time game in which I need reliable UDP, so I've chosen to work with UDT (UDP-based data transfer protocol - http://sourceforge.net/projects/udt/).
The clients (on browsers) send real-time messages to my server via CGI scripts. The problem is that there are some messages that are being lost, and I don't know why because the server says that it sent all the messages successfully to the corresponding clients, but sometimes the client doesn't receive the message.
In my debug file, I've found that when a message is not received by the client, its script says:
error in recv();
recv: Connection was broken.
I would like to get some help on how the server shall know if the client got its message; should I send a NACK or something from the client side? I thought that UDT should do that for me. Can someone clarify this situation?
The relevant sections of the communication parts of my code are below, with some comments:
server's relevant code:
//...
void send_msg_in(player cur, char* xml){
    /* this function stores the current message, xml, in a queue if xml!=NULL,
       and sends the 1st message of the queue to the client */
    /* this function is called when the player connects with the entering xml=NULL
       to get the 1st message of the queue, or with xml!=NULL when a new message
       arrives: in this case the message is stored in the queue, and then the
       message will be sent at the appropriate time, i.e. the messages are ordered. */
    char* msg_ptr = NULL;
    if (xml != NULL){ // add the message to a queue (FIFO), the cur.xml_msgs
        msg_ptr = (char*) calloc(strlen(xml)+1, sizeof(char));
        strcpy(msg_ptr, xml);
        (*(cur.xml_msgs)).push(msg_ptr);
    } // get the 1st message of the queue
    if (!(*(cur.xml_msgs)).empty()){
        xml = (*(cur.xml_msgs)).front();
    }
    if (cur.get_udt_socket_in() != NULL){
        UDTSOCKET cur_udt = *(cur.get_udt_socket_in());
        // cout << "send_msg_in(), cur_udt: " << cur_udt << endl;
        // send the "xml", i.e. the 1st message of the queue...
        if (UDT::ERROR == UDT::send(cur_udt, xml, strlen(xml)+1, 0)){
            UDT::close(cur_udt);
            cur.set_udt_socket_in(NULL);
        }
        else{ // if no error this else is reached
            cout << "TO client:\n" << xml << "\n"; /* if there is no error,
                i.e. on success, the server prints the message that was sent. */
            //  / \
            // /_!_\
            /* the problem is that the messages that are lost don't appear
               on the client side, but they appear here on the server! */
            if (((string) xml).find("<ack.>") == string::npos){
                UDT::close(cur_udt);
                cur.set_udt_socket_in(NULL); // close the socket
            }
            (*(cur.xml_msgs)).pop();
        }
    }
}
//...
client's relevant code:
//...
#define MSGBUFSIZE 1024
char msgbuf[MSGBUFSIZE];
UDTSOCKET client;
ofstream myfile;
//...
main(int argc, char *argv[]){
    //...
    // connect to the server, implicit bind
    if (UDT::ERROR == UDT::connect(client, (sockaddr*)&serv_addr, sizeof(serv_addr))){
        cout << "error in connect();" << endl;
        return 0;
    }
    myfile.open("./log.txt", ios::app);
    send(xml);
    char* cur_xml;
    do{
        cur_xml = receive(); // wait for an ACK or a new message...
        myfile << cur_xml << endl << endl;  //  / \
        /* /_!_\ the lost messages don't appear on the website
           nor in this log file. */
    } while (((string) cur_xml).find("<ack.>") != string::npos);
    cout << cur_xml << endl;
    myfile.close();
    UDT::close(client);
    return 0;
}

char* receive(){
    if (UDT::ERROR == UDT::recv(client, msgbuf, MSGBUFSIZE, 0)){
        //  / \
        /* /_!_\ when a message is not received properly
           this code is usually reached, and an error is printed. */
        cout << "error in recv();" << endl;
        myfile << "error in recv();" << endl;
        myfile << "recv: " << UDT::getlasterror().getErrorMessage() << endl << endl;
        return 0;
    }
    return msgbuf;
}

void* send(string xml){
    if (UDT::ERROR == UDT::send(client, xml.c_str(), strlen(xml.c_str())+1, 0)){
        cout << "error in send();" << endl;
        myfile << "error in send();" << endl;
        myfile << "send: " << UDT::getlasterror().getErrorMessage() << endl << endl;
        return 0;
    }
}
Thank you for any help!
PS. I tried to increase the linger time on close(), after finding the link http://udt.sourceforge.net/udt4/doc/opt.htm, adding the following to the server's code:
struct linger l;
l.l_onoff = 1;
l.l_linger = ...; //a huge value in seconds...
UDT::setsockopt(*udt_socket_ptr, 0, UDT_LINGER, &l, sizeof(l));
but the problem is still the same...
PPS: the other parts of the communication on the server side are below (note: it seems to me that they are not so relevant):
main(int argc, char *argv[]){
    char msgbuf[MSGBUFSIZE];
    UDTSOCKET serv = UDT::socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in my_addr;
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons(PORT);
    my_addr.sin_addr.s_addr = INADDR_ANY;
    memset(&(my_addr.sin_zero), '\0', sizeof(my_addr.sin_zero));
    if (UDT::ERROR == UDT::bind(serv, (sockaddr*)&my_addr, sizeof(my_addr))){
        cout << "error in bind();";
        return 0;
    }
    UDT::listen(serv, 1);
    int namelen;
    sockaddr_in their_addr;
    while (true){
        UDTSOCKET recver = UDT::accept(serv, (sockaddr*)&their_addr, &namelen);
        if (UDT::ERROR == UDT::recv(recver, msgbuf, MSGBUFSIZE, 0)){
            // this recv() is called only once for each accept(), because the
            // clients call CGI scripts via a browser, so they need a new CGI
            // script with a new UDT socket for each request (this is in
            // agreement with the client code presented before).
            cout << "error in recv();" << endl;
        }
        char* player_xml = (char*) &msgbuf;
        cur_result = process_request((char*) &msgbuf, &recver, verbose); // ACK
    }
}
struct result process_request(char* xml, UDTSOCKET* udt_socket_ptr, bool verbose){
    // parse the XML...
    //...
    player* cur_ptr = get_player(me); // searches a vector of player, according to the string "me" from the XML parsing
    UDTSOCKET* udt_ptr = (UDTSOCKET*) calloc(1, sizeof(UDTSOCKET));
    memcpy(udt_ptr, udt_socket_ptr, sizeof(UDTSOCKET));
    if (cur_ptr == NULL){
        // register the player:
        player* this_player = (player*) calloc(1, sizeof(player));
        //...
    }
    else if (strcmp(request_type.c_str(), "info_waitformsg") == 0){
        if (udt_ptr != NULL){
            cur_ptr->set_udt_socket_in(udt_ptr);
            if (!(*(cur_ptr->xml_msgs)).empty()){
                send_msg_in(*cur_ptr, NULL, true);
            }
        }
    }
    else{ // messages that get an instant response from the server
        if (udt_ptr != NULL){
            cur_ptr->set_udt_socket_out(udt_ptr);
        }
        if (strcmp(request_type.c_str(), "info_chat") == 0){
            info_chat cur_info;
            to_object(&cur_info, me, request_type, msg_ptr); // convert the XML string values to a struct
            process_chat_msg(cur_info, xml);
        }
        /* else if (...){ // other types of messages...
        }*/
    }
}
void process_chat_msg(info_chat cur_info, char* xml_in){
    player* player_ptr = get_player(cur_info.me);
    if (player_ptr){
        int i = search_in_matches(matches, cur_info.match_ID);
        if (i >= 0){
            match* cur_match = matches[i];
            vector<player*> players_in = cur_match->followers;
            int n = players_in.size();
            for (int i = 0; i < n; i++){
                if (players_in[i] != msg_owner){
                    send_msg_in(*(players_in[i]), xml, flag);
                }
            }
        }
    }
}
Looking at the UDT source code at http://sourceforge.net/p/udt/git/ci/master/tree/udt4/src/core.cpp, the error message "Connection was broken" is produced when either of the Boolean flags m_bBroken or m_bClosing is true and there is no data in the receive buffer.
Those flags are set in just a few cases:
In sections of code marked "should not happen; attack or bug" (unlikely)
In deliberate close or shutdown actions (don't see this happening in your code)
In expiration of a timer that checks for peer activity (the likely culprit)
In that source file at line 2593 it says:
// Connection is broken.
// UDT does not signal any information about this instead of to stop quietly.
// Application will detect this when it calls any UDT methods next time.
//
m_bClosing = true;
m_bBroken = true;
// ...[code omitted]...
// app can call any UDT API to learn the connection_broken error
Looking at the send() call, I don't see anywhere that it waits for an ACK or NAK from the peer before returning, so I don't think a successful return from send() on the server side is indicative of successful receipt of the message by the client.
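If you want an application-level confirmation anyway, a sketch along these lines might work (it reuses the question's own <ack.> convention and UDT calls; waitForClientAck is a hypothetical helper, and a real version would need a timeout):

#include <iostream>
#include <string>
#include <udt.h>

// Server side: after UDT::send() succeeds, wait for the client's <ack.>
// before treating the message as delivered.
bool waitForClientAck(UDTSOCKET sock)
{
    char ack[64] = {0};
    if (UDT::ERROR == UDT::recv(sock, ack, sizeof(ack) - 1, 0)) {
        std::cerr << "recv: " << UDT::getlasterror().getErrorMessage() << std::endl;
        return false;   // connection broke before the ack arrived
    }
    return std::string(ack).find("<ack.>") != std::string::npos;
}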
You didn't show the code on the server side that binds to the socket and listens for responses from the client; if the problem is there then the server might be happily sending messages and never listening to the client that is trying to respond.
UDP is not a guaranteed-transmission protocol. A host will send a message, but if the recipient does not receive it, or if it is not received properly, no error will be raised. Therefore, it is commonly used in applications that require speed over perfect delivery, such as games. TCP does guarantee delivery, because it requires that a connection be set up first, and each message is acknowledged by the client.
I would encourage you to think about whether you actually need guaranteed receipt of that data, and, if you do, consider using TCP.

TCP/IP network communication in C++

I am trying to write a threaded function that sends system information via TCP/IP over the local network to another computer. I have been using sockets to achieve this, and it has worked out quite all right thus far. But I am now at a point where it usually works, yet around 30% of the time I get error messages telling me that the socket cannot be opened. I use the activeSocket library for the sockets.
#include "tbb/tick_count.h"
#include "ActiveSocket.h"
using namespace std;
CActiveSocket socket;
extern int hardwareStatus;
int establishTCP() {
char time[11];
int communicationFailed = 0;
memset(&time, 0, 11);
socket.Initialize();
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
return communicationFailed;
}
int monitor() {
    cout << "Monitor: init continuous monitoring" << endl;
    int communicationFailed;
    tbb::tick_count monitorCounter = tbb::tick_count::now();
    while (!closeProgram) {
        tbb::tick_count currentTick = tbb::tick_count::now();
        tbb::tick_count::interval_t interval;
        interval = currentTick - monitorCounter;
        if (interval.seconds() > 2) {
            monitorCounter = tbb::tick_count::now();
            communicationFailed = 1;
            char buffer[256];
            sprintf(buffer, "%d;", hardwareStatus);
            establishTCP();
            char *charip = new char[monitoringIP.size() + 1];
            charip[monitoringIP.size()] = 0;
            memcpy(charip, monitoringIP.c_str(), monitoringIP.size());
            const uint8* realip = (const uint8 *) charip;
            int monitorCount = 0;
            cout << "Monitor: " << buffer << endl;
            while (communicationFailed == 1 && monitorCount < 2) {
                monitorCount++;
                if (socket.Open(realip, 2417)) {
                    if (socket.Send((const uint8 *) buffer, strlen(buffer))) {
                        cout << "Monitor: Succeeded sending data" << endl;
                        communicationFailed = 0;
                        socket.Close();
                    } else {
                        socket.Close();
                        communicationFailed = 1;
                        cout << "Monitor: FAILED TO SEND DATA" << endl;
                    }
                } else {
                    socket.Close();
                    communicationFailed = 1;
                    cout << "Monitor: FAILED TO OPEN SOCKET FOR DATA" << endl;
                }
            }
            if (monitorCount == 2) cout << "Monitor: UNABLE TO SEND DATA" << endl;
        }
    }
    return communicationFailed;
}
I think I am doing something wrong in these functions, and that the problem is not on the other side of the line where the data is received. Can anyone see any obvious mistakes in this code that could cause the failure? I keep getting my own cout message "Monitor: FAILED TO OPEN SOCKET FOR DATA".
EDIT: With telnet everything works fine, 100% of the time
You can use netstat to check that the server is listening on the port and that connections are being established. Snoop is another good tool in your armoury for finding out what is going wrong. Another possibility is to use telnet to see if the client can connect to that IP address and port. As for the code, I will take a look at it later to see if something has gone awry.
socket is a global variable. It might be re-used concurrently between two threads or sequentially inside one thread. In fact, the while (!closeProgram) loop indicates that you intend to use it sequentially.
Some documentation for CActiveSocket::Open reads: "Connection-based protocol sockets (CSocket::SocketTypeTcp) may successfully call Open() only once..."
Perhaps your program fails when you call .Open() twice on the same object.
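If so, a sketch of the workaround is to use a fresh, locally scoped socket per attempt, so Open() is only ever called once per object (sendOnce is a hypothetical helper; it mirrors the Initialize/Open/Send/Close calls already in the question):

// Assumes the same includes and typedefs as the question's code.
bool sendOnce(const uint8 *realip, const char *buffer)
{
    CActiveSocket s;                  // one Open() per socket object
    s.Initialize();
    if (!s.Open(realip, 2417)) return false;             // same port as above
    bool ok = s.Send((const uint8 *) buffer, strlen(buffer)) != 0;
    s.Close();                        // always close before the object dies
    return ok;
}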
I eventually found out the problem with my code. As the connection was unstable and working only about 70% of the time, it appeared to be a timeout issue. I removed the two timeout settings
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
(presumably 0 seconds plus 20 microseconds, which is far too short for a TCP connect). Now it works perfectly fine, thanks for the troubleshooting tips though!

Memory/Threads leaks, developing simple HTTP-server with WinSock2

I have begun to develop my tool, which works with the net at the TCP level and will provide simple web-server functions.
While testing my program I ran into very bad problems:
Memory leaks
Thousands of threads created immediately
In taskmgr.exe you can see roughly 1.5 thousand threads and about 50 KB of allocated memory.
Also, I compiled the program as 32-bit, but in the vmmap utility you can see a lot of 64-bit stacks. My OS is 64-bit, but in taskmgr.exe you can see *32. I don't know how a 32-bit program uses a 64-bit stack; maybe it's normal when running a 32-bit program on a 64-bit OS, but I have no knowledge of this area of OS design, so I would be very pleased if you could give me a piece of advice on this question.
So, why does my program immediately create a lot of threads? (I guess it's because of the while(true) block.)
But I want the following:
Create a new thread for each new request
When the request has been handled, terminate the thread and free the memory
How should I rework my code?
Thanks!
Here is my code (MS VC++ 9):
#include <iostream>
#include <Windows.h>
#pragma comment(lib, "Ws2_32.lib")
typedef struct Header
{
    friend struct Net;
private:
    WORD wsa_version;
    WSAData wsa_data;
    SOCKET sock;
    SOCKADDR_IN service;
    char *ip;
    unsigned short port;
public:
    Header(void)
    {
        wsa_version = 0x202;
        ip = "0x7f.0.0.1";
        port = 0x51;
        service.sin_family = AF_INET;
        service.sin_addr.s_addr = inet_addr(ip);
        service.sin_port = htons(port);
    }
} Header;
typedef struct Net
{
private:
    int result;
    HANDLE thrd;
    DWORD exit_code;

    void WSAInit(WSAData *data, WORD *wsa_version)
    {
        result = WSAStartup(*wsa_version, &(*data));
        if(result != NO_ERROR)
        {
            std::cout << "WSAStartup() failed with the error: " << result << std::endl;
        }
        else
        {
            std::cout << (*data).szDescription << " " << (*data).szSystemStatus << std::endl;
        }
    }

    void SocketInit(SOCKET *my_socket)
    {
        (*my_socket) = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if((*my_socket) == INVALID_SOCKET)
        {
            std::cout << "Socket initialization failed with the error: " << WSAGetLastError() << std::endl;
            WSACleanup();
        }
        else
        {
            std::cout << "Socket initialization successful!" << std::endl;
        }
    }

    void SocketBind(SOCKET *my_socket, SOCKADDR_IN *service)
    {
        result = bind((*my_socket), (SOCKADDR*)&(*service), sizeof(*service));
        if(result == SOCKET_ERROR)
        {
            std::cout << "Socket binding failed with the error: " << WSAGetLastError() << std::endl;
            closesocket((*my_socket));
            WSACleanup();
        }
        else
        {
            std::cout << "Socket binding successful!" << std::endl;
        }
        result = listen(*my_socket, SOMAXCONN);
        if(result == SOCKET_ERROR)
        {
            std::cout << "Socket listening failed with the error: " << WSAGetLastError() << std::endl;
        }
        else
        {
            std::cout << "Listening to the socket..." << std::endl;
        }
    }

    static void SocketAccept(SOCKET *my_socket)
    {
        SOCKET sock_accept = accept((*my_socket), 0, 0);
        if(sock_accept == INVALID_SOCKET)
        {
            std::cout << "Accept failed with the error: " << WSAGetLastError() << std::endl;
            closesocket(*my_socket);
            WSACleanup();
        }
        else
        {
            std::cout << "Client socket connected!" << std::endl;
        }
        char data[0x400];
        int result = recv(sock_accept, data, sizeof(data), 0);
        HandleRequest(data, result);
        char *response = "HTTP/1.1 200 OK\r\nServer: Amegas.sys-IS/1.0\r\nContent-type: text/html\r\nSet-Cookie: ASD643DUQE7423HFDG; path=/\r\nCache-control: private\r\n\r\n<h1>Hello World!</h1>\r\n\r\n";
        result = send(sock_accept, response, (int)strlen(response), 0);
        if(result == SOCKET_ERROR)
        {
            std::cout << "Sending data via socket failed with the error: " << WSAGetLastError() << std::endl;
            closesocket(sock_accept);
            WSACleanup();
        }
        else
        {
            result = shutdown(sock_accept, 2);
        }
    }

    static void HandleRequest(char response[], int length)
    {
        std::cout << std::endl;
        for(int i = 0; i < length; i++)
        {
            std::cout << response[i];
        }
        std::cout << std::endl;
    }

    static DWORD WINAPI Threading(LPVOID lpParam)
    {
        SOCKET *my_socket = (SOCKET*)lpParam;
        SocketAccept(my_socket);
        return 0;
    }

public:
    Net(void)
    {
        Header *obj_h = new Header();
        WSAInit(&obj_h->wsa_data, &obj_h->wsa_version);
        SocketInit(&obj_h->sock);
        SocketBind(&obj_h->sock, &obj_h->service);
        while(true)
        {
            thrd = CreateThread(NULL, 0, &Net::Threading, &obj_h->sock, 0, NULL);
            //if(GetExitCodeThread(thrd, &exit_code) != 0)
            //{
            //    ExitThread(exit_code);
            //}
        }
        delete &obj_h;
    }
} Net;
int main(void)
{
    Net *obj_net = new Net();
    delete &obj_net;
    return 0;
}
You should create the thread AFTER you accept a connection, not before.
What you are doing is creating a ton of threads, and then having each of them wait for a connection. Many of them have nothing to do. I don't even know if Windows' accept call is thread-safe - you might end up with multiple threads handling the same connection.
Instead, in your main loop (the while(true) in Net's constructor), you need to call accept(). Since accept() blocks until it has a connection, this makes the main thread wait until somebody tries to connect. Then, when they do, you create another thread (or process, more likely on UNIX) to handle that connection. So your loop now looks like this:
SOCKET sock_accept = accept((*my_socket), 0, 0);
if(sock_accept == INVALID_SOCKET)
{
    std::cout << "Accept failed with the error: " << WSAGetLastError() << std::endl;
    closesocket(*my_socket);
    WSACleanup();
}
else
{
    std::cout << "Client socket connected!" << std::endl;
}
thrd = CreateThread(NULL, 0, &Net::Threading, &obj_h->sock, 0, NULL);
// push back thrd into a std::vector<HANDLE> or something like that
// if you want to keep track of it for later: there's more than one thread
Then, delete that code you moved from SocketAccept into this loop. And then, for cosmetic purposes, I would change the name of SocketAccept to SocketHandleConnection.
Now, when your thread starts, it already has a connection, and all you need to do is handle the data (e.g. what you do starting at char data[0x400]).
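One caveat: the CreateThread() call above still passes the listening socket; the thread really needs the socket returned by accept(). A sketch of that hand-off (HandleConnection is a hypothetical rename of the old SocketAccept body, minus the accept() call):

// Hand the accepted socket (not the listening one) to the thread.
// Heap-allocate the descriptor so it survives this loop iteration;
// the thread takes ownership and frees it.
SOCKET *client = new SOCKET(sock_accept);
thrd = CreateThread(NULL, 0, &Net::Threading, client, 0, NULL);

// ...and the matching thread function:
static DWORD WINAPI Threading(LPVOID lpParam)
{
    SOCKET *client = (SOCKET*)lpParam;
    HandleConnection(*client);   // receive, respond, shutdown
    closesocket(*client);
    delete client;
    return 0;
}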
If you want to handle cleanup for connections, there are a few ways to do this. One, since you are threaded, you can have the thread do its own cleanup. It shares memory with the main process, so you can do this. But in this example, I don't see anything you need to clean up.
Lastly, I think you don't understand what ExitThread does. According to MSDN:
ExitThread is the preferred method of exiting a thread in C code. However, in C++ code,
the thread is exited before any destructors can be called or any other automatic cleanup
can be performed. Therefore, in C++ code, you should return from your thread function.
So it appears that you don't need to call ExitThread- you just return from your function and the thread exits automatically. You don't need to call it from the main thread.
Finally, you should really (if you can) use the new standard C++ threads in C++11, and if you then put in a little effort to port your code over to boost::asio, you'll have a completely cross-platform application, with no need for Windows API C ugliness :D
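For instance, the accept loop with C++11 threads might look like this (a sketch only; listenSock and handleConnection are stand-ins for the pieces above):

#include <thread>

// One detached std::thread per accepted connection.
while (true)
{
    SOCKET client = accept(listenSock, 0, 0);
    if (client == INVALID_SOCKET) continue;             // log and keep going
    std::thread(handleConnection, client).detach();     // thread cleans itself up
}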
DISCLAIMER: I only have a passing understanding of Windows as most of my experience is related to UNIX. I have attempted to be as accurate as I can but if I have any misconceptions about how this knowledge converts over to Windows, well, I warned you.
Why are you creating threads in an infinite loop? This will, of course, create tons of threads. I am referring to this piece of code:
while(true)
{
    thrd = CreateThread(NULL, 0, &Net::Threading, &obj_h->sock, 0, NULL);
}