C++ socket on Windows

I have a question.
I create a socket, connect, and send bytes; all of that works.
For receiving data I use the recv function:
char * TOReceive= new char[200];
recv(ConnectSocket, TOReceive , 200, 0);
When there is data it reads it and returns successfully, and when there is no data it waits for data. All I need is to limit the waiting time: for example, if there is no data for 10 seconds it should return.
Many thanks.

Windows sockets has the select function. You pass it a set containing the socket(s) to check for readability, plus a timeout, and it returns telling you whether a socket became readable or whether the timeout was reached.
See: http://msdn.microsoft.com/en-us/library/ms740141(VS.85).aspx
Here's how to do it:
bool readyToReceive(int sock, int interval = 1)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(sock, &fds);

    timeval tv;
    tv.tv_sec = interval;
    tv.tv_usec = 0;

    return (select(sock + 1, &fds, 0, 0, &tv) == 1);
}
If it returns true, your next call to recv should return immediately with some data.
You could make this more robust by checking select for error return values and throwing exceptions in those cases. Here I just return true if it says one handle is ready to read, but that means I return false under all other circumstances, including the socket being already closed.
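For example, a slightly more defensive variant might look like this (a sketch; the exception type and what you do on error are up to you):
#include <winsock2.h>
#include <stdexcept>
#include <string>

// Sketch: same idea as above, but select() errors are reported instead of
// being folded into "not ready".
bool readyToReceive(SOCKET sock, int interval = 1)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(sock, &fds);

    timeval tv;
    tv.tv_sec = interval;
    tv.tv_usec = 0;

    // On Windows the first argument (nfds) is ignored, so 0 is fine here.
    int result = select(0, &fds, NULL, NULL, &tv);
    if (result == SOCKET_ERROR)
        throw std::runtime_error("select failed: " + std::to_string(WSAGetLastError()));

    return result == 1;   // 0 means the timeout expired
}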

You have to call the select function prior to calling recv to know if there is something to be read.

You can use the SO_RCVTIMEO socket option to specify a timeout value for the recv() call.
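For example, something like this should work with Winsock, where the option value is a DWORD in milliseconds (on POSIX systems it is a struct timeval instead); treat it as a sketch:
// Sketch: make recv() give up after 10 seconds of no data (Winsock).
DWORD timeoutMs = 10000;
if (setsockopt(ConnectSocket, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeoutMs, sizeof(timeoutMs)) == SOCKET_ERROR) {
    // handle error, see WSAGetLastError()
}

char buffer[200];
int n = recv(ConnectSocket, buffer, sizeof(buffer), 0);
if (n == SOCKET_ERROR && WSAGetLastError() == WSAETIMEDOUT) {
    // no data arrived within 10 seconds
}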

Related

TCP C send data when not receiving data

I'm trying to send data to the connected client, even when the client did not send me a message first.
This is my current code:
while (true) {
    // open a new socket to transmit data per connection
    int sock;
    if ((sock = accept(listen_sock, (sockaddr *) &client_address, &client_address_len)) < 0) {
        logger.log(TYPE::ERROR, "server::could not open a socket to accept data");
        exit(0);
    }

    int n = 0, total_received_bytes = 0, max_len = 4096;
    std::vector<char> buffer(max_len);

    logger.log(TYPE::SUCCESS,
               "server::client connected with ip address: " + std::string(inet_ntoa(client_address.sin_addr)));

    // keep running as long as the client keeps the connection open
    while (true) {
        n = recv(sock, &buffer[0], buffer.size(), 0);
        if (n > 0) {
            total_received_bytes += n;
            std::string str(buffer.begin(), buffer.end());
            KV key_value = kv_from(vector_from(str));
            messaging.set_command(key_value);
        }

        std::string message = "hmc::" + messaging.get_value("hmc") + "---" + "sonar::" + messaging.get_value("sonar") + "\n";
        send(sock, message.c_str(), message.length(), 0);
    }

    logger.log(TYPE::INFO, "server::connection closed");
    close(sock);
}
I thought that by moving n = recv(sock, &buffer[0], buffer.size(), 0); out of the while condition it would send the data indefinitely, but that is not what happened.
Thanks in advance.
Solution
Adding MSG_DONTWAIT to the recv function enabled non-blocking operations which I was looking for.
First I will explain why it does not work, then I will propose some solutions. Basically you will find the answer in the man7.org > Linux > man-pages, and for recv specifically here.
When recv is called, it will not return until data is available and can be read. This behaviour is called "blocking": the current execution thread is blocked until data has been read.
So, calling the function
n = recv(sock, &buffer[0], buffer.size(), 0);
as you did causes the trouble. You also need to check the return code: 0 means the connection was closed, -1 means an error occurred and you must check errno for further information.
You can put the socket into non-blocking mode for its whole lifetime with the fcntl function and the O_NONBLOCK flag. You can also pass the MSG_DONTWAIT flag as the 4th parameter (flags) to unblock the function on a per-call basis.
In both cases, if no data is available, the function returns -1 and you need to check errno for EAGAIN or EWOULDBLOCK. A return value of 0 still indicates that the connection has been closed.
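A minimal sketch of the per-call variant (using the sock and buffer from your code):
// Non-blocking read of a single chunk: MSG_DONTWAIT makes this one recv()
// return immediately instead of blocking.
ssize_t n = recv(sock, &buffer[0], buffer.size(), MSG_DONTWAIT);
if (n > 0) {
    // got n bytes of data, process them
} else if (n == 0) {
    // peer closed the connection -- leave the loop and close(sock)
} else if (errno == EAGAIN || errno == EWOULDBLOCK) {
    // no data available right now; not an error, just try again later
} else {
    // real error, inspect errno / strerror(errno)
}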
But from an architecture point of view, I would not recommend this approach. You could use multiple threads for receiving and sending data, or, on Linux, one of select, poll or similar functions. There is even a common design pattern for this: it is called "Reactor", and there are related patterns like "Acceptor/Connector" and "Proactor"/"ACT". If you plan to write a more robust application, you may want to consider those.
You will find an implementation of Acceptor, Connector, Reactor, Proactor, ACT here
Hope this helps

Set connection timeout for SSL_read in OpenSSL in C++

I am developing a Linux application which communicates using OpenSSL. I am currently running some robustness tests, and one of them is giving me a hard time.
I unplug the Ethernet cable while my program is downloading a big file, and I want it to stop after 30 seconds, for example. But it never stops.
I use SSL_read and this is where it blocks :
count = SSL_read(ssl, buffer, BUFSIZE);
Is it possible to set a timeout on SSL_read?
I have tried SSL_CTX_set_timeout() but it is not working. I have also seen that it might be possible to use select(), but I don't understand how to use it with SSL_read().
You can do that the same way as with "normal" sockets: set a receive timeout on the socket handed to SSL, and SSL_read will return -1 once that time passes with nothing received. Example below (uninteresting parts are written in pseudocode):
struct timeval tv;
char buffer[1024];
// socket, bind, listen, ...
// accept
int new_fd = accept(...)
tv.tv_sec = 5; // 5 seconds
tv.tv_usec = 0;
setsockopt(new_fd, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof tv);
// assign fd to ssl ...
// blocking method will return -1 if nothing is received after 5 seconds
int cnt = SSL_read(ssl, buffer, sizeof buffer);
if (cnt == -1) return; // connection error or timeout
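Note that when SSL_read() returns 0 or a negative value, you should ask OpenSSL why via SSL_get_error(); a sketch (exactly how the timeout surfaces can vary with the OpenSSL version and BIO setup):
int cnt = SSL_read(ssl, buffer, sizeof buffer);
if (cnt <= 0) {
    int err = SSL_get_error(ssl, cnt);
    // With SO_RCVTIMEO set, a timeout typically shows up here as
    // SSL_ERROR_WANT_READ or SSL_ERROR_SYSCALL, depending on the BIO setup.
    if (err == SSL_ERROR_ZERO_RETURN) {
        // peer closed the TLS connection cleanly
    } else if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_SYSCALL) {
        // timed out or connection problem -- decide whether to retry or abort
    }
}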

Can someone explain the function of writeable and readable fd_sets with WinSock?

I'm writing a network game for a university project. While I have messages being sent and received between a client and a server, I'm unsure how to implement a writable fd_set (my lecturer's example code only included a readable fd_set) and what the function of both fd_sets is with select(). Any insight you could give would be great in helping me understand this.
My server code is as such:
bool ServerSocket::Update() {
    // Update the connections with the server
    fd_set readable;
    FD_ZERO(&readable);

    // Add server socket, which will be readable if there's a new connection
    FD_SET(m_socket, &readable);

    // Add connected clients' sockets
    if(!AddConnectedClients(&readable)) {
        Error("Couldn't add connected clients to fd_set.");
        return false;
    }

    // Set timeout to wait for something to happen (0.5 seconds)
    timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 500000;

    // Wait for the socket to become readable
    int count = select(0, &readable, NULL, NULL, &timeout);
    if(count == SOCKET_ERROR) {
        Error("Select failed, socket error.");
        return false;
    }

    // Accept new connection to the server socket if readable
    if(FD_ISSET(m_socket, &readable)) {
        if(!AddNewClient()) {
            return false;
        }
    }

    // Check all clients to see if there are messages to be read
    if(!CheckClients(&readable)) {
        return false;
    }

    return true;
}
A socket becomes:
readable if there is either data in the socket receive buffer or a pending FIN (recv() is about to return zero)
writable if there is room in the socket send buffer. Note that this is true nearly all the time, so you should select for writability only when you've encountered a prior EWOULDBLOCK/EAGAIN on the socket, and stop doing so when you don't.
You'd create an fd_set variable called writeable, initialize it the same way (with the same sockets), and pass it as select's third argument:
select(0, &readable, &writeable, NULL, &timeout);
Then after select returns you'd check whether each socket is still in the set writeable. If so, then it's writeable.
Basically, exactly the same way readable works, except that it tells you a different thing about the socket.
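Putting it together, a sketch of how the two sets could be built and checked in your Update() loop (names such as m_socket follow your code above; which client sockets you add to writeable depends on whether you still have queued data for them):
fd_set readable, writeable;
FD_ZERO(&readable);
FD_ZERO(&writeable);

// Listening socket becomes readable when a new connection is pending.
FD_SET(m_socket, &readable);

// Add each connected client to 'readable'; add it to 'writeable' only if a
// previous send() failed with WSAEWOULDBLOCK and you still have data queued.

timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 500000;   // 0.5 seconds

int count = select(0, &readable, &writeable, NULL, &timeout);
if (count == SOCKET_ERROR) {
    // handle error
}

// After select() returns, FD_ISSET(sock, &writeable) means a send() on that
// socket should now accept more data without blocking.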
select() is terribly outdated and its interface is arcane. poll (or its Windows counterpart WSAPoll) is a modern replacement for it, and should always be preferred.
It would be used in following manner:
WSAPOLLFD pollfd = {m_socket, POLLWRNORM, 0};
int rc = WSAPoll(&pollfd, 1, 100);
if (rc == 1) {
// Socket is ready for writing!
}
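The same call works for readability checks with POLLRDNORM, e.g. (sketch):
WSAPOLLFD pollfd = {m_socket, POLLRDNORM, 0};
int rc = WSAPoll(&pollfd, 1, 100);   // wait up to 100 ms
if (rc == 1 && (pollfd.revents & POLLRDNORM)) {
    // Socket has data to read (or a pending connection on a listening socket)
}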

select() always returns 1; TCP connected socket troubles in c++

I'm doing a c++ project that requires a server to create a new thread to handle connections each time accept() returns a new socket descriptor. I am using select to decide when a connection attempt has taken place as well as when a client has sent data over the newly created client socket (the one that accept creates). So two functions and two selects - one for polling the socket dedicated to listening for connections, one for polling the socket created when a new connection is successful.
The behavior of the first case is what I expect - FD_ISSET returns true for the id of my listening socket only when a connection is requested, and is false until the next connection attempt. The second case does not work, even though the code is exactly the same with different fd_set and socket objects. I'm wondering if this stems from the TCP socket? Do these sockets always return true when polled by a select due to their streamy nature?
//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid, &readfds);

//start server loop
for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(sid+1, &readfds, NULL, NULL, &tv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in select\n");
            FD_ZERO(&readfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }

    //check if listening socket is ready to be read after select returns
    if(FD_ISSET(sid, &readfds)){
        int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
        if(newsocketfd == -1){
            if(errno == 4){
                printf("SIGINT received in accept\n");
                myhandler(SIGINT);
            }else{
                perror("server accept");
                exit(1);
            }
        }else{
            s->forkThreadForClient(newsocketfd);
        }
    }
//non working snippet
//setup clients socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;

fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid, &creadfds);

for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(csid+1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in client select\n");
            FD_ZERO(&creadfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }else{
        printf("Select returned %i\n", numsockets);
    }

    if(FD_ISSET(csid, &creadfds)){
        //read header
        unsigned char header[11];
        for(int i=0; i<11; i++){
            if(recv(csid, rubyte, 1, 0) != 0){
                printf("Received %X from client\n", *rubyte);
                header[i] = *rubyte;
            }
        }
Any help would be appreciated.
Thanks for the responses, but I don't believe it has much to do with the timeout value being inside the loop. I tested it, and even with tv being reset and the fd_set being zeroed every time the server loops, select still returns 1 immediately. I feel like there's a problem with how select is treating my TCP socket. Any time I set select's highest socket id to encompass my TCP socket, it returns immediately with that socket set. Also, the client does not send anything, it just connects.
One thing you must do is reset the value of tv to your desired timeout every time before you call select(). The select() function changes the values in tv to indicate how much time is left in the timeout, after returning from the function. If you fail to do this, your select() calls will end up using a timeout of zero, which is not efficient.
Some other operating systems implement select() differently, in such a way that they don't change the value of tv. Linux does change it, so you must reset it.
Move
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
into the loop. The function select() reports the result in this structure. You already retrieve the result with
FD_ISSET(csid,&creadfds);
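In other words, both the set and the timeout have to be rebuilt on every iteration. Roughly (a sketch based on your non-working snippet):
for (;;) {
    // select() overwrites both of these, so rebuild them on every pass
    fd_set creadfds;
    FD_ZERO(&creadfds);
    FD_SET(csid, &creadfds);

    struct timeval ctv;
    ctv.tv_sec = 0;
    ctv.tv_usec = 500000;

    int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
    if (numsockets == -1) {
        // handle error / EINTR as before
    }

    if (numsockets > 0 && FD_ISSET(csid, &creadfds)) {
        // csid really has data now; call recv()
    }
}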

Socket select() works in Windows and times out in Linux

I'm porting a Windows network application to Linux and ran into a timeout problem with the select call on Linux. The following function blocks for the entire timeout value and then returns, although I verified with a packet sniffer that the client has already sent the data.
int recvTimeOutTCP( SOCKET socket, long sec, long usec )
{
    struct timeval timeout;
    fd_set fds;

    timeout.tv_sec = sec;
    timeout.tv_usec = usec;

    FD_ZERO( &fds );
    FD_SET( socket, &fds );

    // Possible return values:
    // -1: error occurred
    //  0: timed out
    // > 0: data ready to be read
    cerr << "Waiting on fd " << socket << endl;

    return select(1, &fds, 0, 0, &timeout);
}
I think the first parameter to select() should be socket + 1.
You really should use another name, as socket is also used for other things. Usually sock is used.
select on Windows ignores the first parameter. From MSDN:
int select(
__in int nfds,
__inout fd_set *readfds,
__inout fd_set *writefds,
__inout fd_set *exceptfds,
__in const struct timeval *timeout
);
Parameters
nfds [in]
Ignored. The nfds parameter is included only for compatibility with Berkeley sockets.
...
The issue is that fd_set on Linux is a bit array (originally it was just an int, so you could only watch the first 16 descriptors of your process). On Windows, fd_set is an array of sockets with a length at the front (which is why Windows doesn't need to know how many bits to watch).
The poll() function takes an array of records to watch on Linux and has other benefits which make it a better choice than select().
int recvTimeOutTCP( SOCKET socket, long msec )
{
    struct pollfd sockpoll;

    sockpoll.fd = socket;
    sockpoll.events = POLLIN;

    return poll(&sockpoll, 1, msec);
}
From the man page of select:
int select(int nfds,
fd_set* restrict readfds,
fd_set* restrict writefds,
fd_set* restrict errorfds,
struct timeval* restrict timeout);
The first nfds descriptors are checked in each set; i.e., the descriptors from 0 through nfds-1 in the descriptor sets are examined.
Thus the first parameter to select should be socket + 1.
return select(socket + 1, &fds, 0, 0, &timeout);
The first parameter to select(...) is the number of file descriptor to check in the set. Your call is telling it to only look at file descriptor 0, which is almost certainly not what socket is set to.