Set connection timeout for SSL_read in OpenSSL in C++ - c++

I am developing a Linux application which communicates over SSL using OpenSSL. I am currently running some robustness tests, and one of them is giving me a hard time: I pull out the Ethernet cable while my program is downloading a big file, and I want it to give up after 30 seconds, for example. But it never stops.
I use SSL_read(), and this is where it blocks:
count = SSL_read(ssl, buffer, BUFSIZE);
Is it possible to set a timeout on SSL_read()?
I have tried SSL_CTX_set_timeout(), but it does not help. I have also seen that it might be possible to use select(), but I don't understand how to use it with SSL_read().

You can do that the same way as with "normal" sockets: set a receive timeout (SO_RCVTIMEO) on the underlying socket before handing it to the SSL object, and SSL_read() will return -1 if nothing is received within that time. Example below (uninteresting parts are written in pseudocode):
struct timeval tv;
char buffer[1024];
// socket, bind, listen, ...
// accept
int new_fd = accept(...);
tv.tv_sec = 5; // 5 seconds
tv.tv_usec = 0;
setsockopt(new_fd, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof tv);
// assign fd to ssl ...
// blocking method will return -1 if nothing is received after 5 seconds
int cnt = SSL_read(ssl, buffer, sizeof buffer);
if (cnt == -1) return; // connection error or timeout
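To tell a timeout apart from a real error, you can additionally look at SSL_get_error() and errno after SSL_read() returns. A minimal sketch (the function name and parameters are just for illustration; it assumes ssl and fd were set up as above, and depending on the OpenSSL version the timed-out read may surface as SSL_ERROR_WANT_READ or SSL_ERROR_SYSCALL, so both are checked):
#include <openssl/ssl.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstdio>

int read_with_timeout(SSL *ssl, int fd, char *buffer, int size, int seconds)
{
    // Receive timeout on the socket underneath the SSL object.
    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof tv);

    int count = SSL_read(ssl, buffer, size);
    if (count <= 0) {
        int err = SSL_get_error(ssl, count);
        if ((err == SSL_ERROR_WANT_READ || err == SSL_ERROR_SYSCALL) &&
            (errno == EAGAIN || errno == EWOULDBLOCK))
            fprintf(stderr, "SSL_read timed out\n");   // nothing arrived in time
        else
            fprintf(stderr, "SSL_read failed, SSL error %d\n", err);
    }
    return count;
}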

Related

Setting both read() and send() timeouts for the same C socket on linux

I managed to set a read timeout for a C socket with:
struct timeval tv_read;
tv_read.tv_sec = 2;
tv_read.tv_usec = 0;
setsockopt(my_socket, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv_read, sizeof tv_read);
Now, I want to make this same socket time out for send operations as well. I have found in the setsockopt() documentation that the proper way to set a timeout for send is quite similar, just replacing SO_RCVTIMEO with SO_SNDTIMEO:
struct timeval tv_write;
tv_write.tv_sec = 4;
tv_write.tv_usec = 0;
setsockopt(my_socket, SOL_SOCKET, SO_SNDTIMEO, (const char*)&tv_write, sizeof tv_write);
My question is: does a successive call to setsockopt() override the previous one, or should I actually call it twice, once with SO_RCVTIMEO and once with SO_SNDTIMEO, as shown above?
setsockopt() sets one specific option at a time; a later call with a different option does not override an earlier one. It is perfectly fine to set multiple options individually.
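For example, a minimal sketch reusing my_socket, tv_read and tv_write from the question; the two calls simply go one after the other:
// Each option is stored separately, so neither call overrides the other.
setsockopt(my_socket, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv_read, sizeof tv_read);
setsockopt(my_socket, SOL_SOCKET, SO_SNDTIMEO, (const char*)&tv_write, sizeof tv_write);
// Reads now time out after 2 seconds and writes after 4 seconds.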

UDP socket select() returns 1 without delay (timeout) under some conditions

I have run into a problem during development of my client application.
I want to use non-blocking UDP sockets to communicate with a server. I am using the winsock2 library on Windows.
But... for some reason select() behaves strangely under the following conditions:
The socket has no bound address and port (it is a client-side socket, so it doesn't need one).
Before select() I send data with a sendto() call to my own local address (for example 192.168.1.2) and some port.
Under these conditions select() returns 1 instantly, without even waiting for the timeout, as if I had a packet ready to receive.
But if I then call recvfrom() it is sure to return -1.
If I send my packets to any other address (one that is not my address on the LAN), then select() works as intended.
select() also works as intended if I don't send any packets to any address before calling select().
Socket initialization method:
bool CUdpSocket::initialize()
{
    _handle = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    // ... error processing code, returns false if error ...
}
Method which uses select(). This method works fine for a server socket (with a bound address and port).
bool CUdpSocket::waitData(s32 timeout_ms)
{
    fd_set readset;
    int result;
    struct timeval tv;

    // Initialize the set.
    FD_ZERO(&readset);
    FD_SET(_handle, &readset);

    // Initialize time out struct.
    tv.tv_sec = 0;
    tv.tv_usec = timeout_ms * 1000;

    result = select(_handle + 1, &readset, NULL, NULL, &tv);

    // Timeout with no data.
    if (result == 0) {
        return false; // Get out of here!
    }

    // Error.
    if (result < 0) {
        // TODO: Maybe throw exception or do something.
        return false;
    } else if (!FD_ISSET(_handle, &readset)) {
        return false; // No data!
    }

    // There is some data!
    return true;
}
If you send a packet from an unbound UDP socket, the OS will pick an unused port for you and bind the socket to that port -- the UDP protocol requires that every outgoing datagram have a source address and port.
So if the packet you're sending results in a response, then it makes perfect sense for select() to return 1 -- that's the response to the packet you sent. On Windows that "response" can also be an ICMP port-unreachable error for the datagram you sent to your own address: it is reported through the socket, so select() signals readability, but the following recvfrom() fails (typically with WSAECONNRESET).
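A minimal sketch of how to tell the two cases apart after select() reports readability (it assumes the CUdpSocket class from the question and uses a hypothetical helper name; whether you actually see WSAECONNRESET depends on the Windows version and on what is listening on the destination port):
#include <winsock2.h>
#include <cstdio>

// Returns true if a real datagram was read into `buffer`.
bool CUdpSocket::tryReceive(char* buffer, int size)
{
    sockaddr_in from{};
    int fromLen = sizeof(from);

    int received = recvfrom(_handle, buffer, size, 0,
                            reinterpret_cast<sockaddr*>(&from), &fromLen);
    if (received >= 0)
        return true; // a real datagram arrived

    // An ICMP error for a previously sent datagram also wakes up select(),
    // but then recvfrom() fails; the socket remains usable afterwards.
    printf("recvfrom failed, error %d\n", WSAGetLastError());
    return false;
}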

How to set a timeout for a UDP socket in C++ (Unix)? [duplicate]

I am trying to set a 100 ms timeout on a UDP socket. I am using C. I have posted the relevant pieces of my code below. I am not sure why it is not timing out; it just hangs when it doesn't receive a segment. Does this only work on sockets that are not bound using bind()?
#define TIMEOUT_MS 100 /* Seconds between retransmits */
if ((rcv_sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
    DieWithError("socket() failed");
//set timer for recv_socket
static int timeout = TIMEOUT_MS;
setsockopt(rcv_sock, SOL_SOCKET, SO_RCVTIMEO, (char*)&timeout, sizeof(timeout));

if (recvfrom(rcv_sock, ackBuffer, sizeof(ackBuffer), 0,
             (struct sockaddr *) &servAddr2, &fromSize) < 0){
    //timeout reached
    printf("Timout reached. Resending segment %d\n", seq_num);
    num_timeouts++;
}
The SO_RCVTIMEO option expects a struct timeval (defined in sys/time.h), not an integer like the one you're passing to it. The timeval struct has a field for seconds and a field for microseconds. To set the timeout to 100 ms, the following should do the trick:
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 100000;
if (setsockopt(rcv_sock, SOL_SOCKET, SO_RCVTIMEO,&tv,sizeof(tv)) < 0) {
perror("Error");
}
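As a follow-up, a minimal sketch of how the timeout can then be detected on the receive path (reusing rcv_sock, ackBuffer, servAddr2, fromSize and seq_num from the question, and assuming errno.h is included): when the timeout expires, recvfrom() returns -1 and errno is set to EAGAIN or EWOULDBLOCK.
ssize_t n = recvfrom(rcv_sock, ackBuffer, sizeof(ackBuffer), 0,
                     (struct sockaddr *) &servAddr2, &fromSize);
if (n < 0) {
    if (errno == EAGAIN || errno == EWOULDBLOCK) {
        // 100 ms passed with no datagram: treat it as a timeout and retransmit
        printf("Timeout reached. Resending segment %d\n", seq_num);
    } else {
        perror("recvfrom");
    }
}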
I have the same problem. I tried to adopt the solution you suggested, using the timeval struct, but it did not seem to work.
I have read the Microsoft documentation: on Windows the timeout should be a DWORD holding the number of milliseconds, but there is also another thing to do. If the socket is created using the WSASocket() function, then the dwFlags parameter must have the WSA_FLAG_OVERLAPPED attribute set for the timeout to work; otherwise the timeout never takes effect.

select() always returns 1; TCP connected socket troubles in c++

I'm doing a c++ project that requires a server to create a new thread to handle connections each time accept() returns a new socket descriptor. I am using select to decide when a connection attempt has taken place as well as when a client has sent data over the newly created client socket (the one that accept creates). So two functions and two selects - one for polling the socket dedicated to listening for connections, one for polling the socket created when a new connection is successful.
The behavior of the first case is what I expect - FD_ISSET returns true for the descriptor of my listening socket only when a connection is requested, and is false until the next connection attempt. The second case does not work, even though the code is exactly the same apart from the fd_set and socket objects. I'm wondering if this stems from it being a connected TCP socket - do these sockets always show as readable when polled by select() because of their stream nature?
//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid, &readfds);

//start server loop
for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(sid + 1, &readfds, NULL, NULL, &tv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT recieved in select\n");
            FD_ZERO(&readfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }
    //check if listening socket is ready to be read after select returns
    if(FD_ISSET(sid, &readfds)){
        int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
        if(newsocketfd == -1){
            if(errno == 4){
                printf("SIGINT recieved in accept\n");
                myhandler(SIGINT);
            }else{
                perror("server accept");
                exit(1);
            }
        }else{
            s->forkThreadForClient(newsocketfd);
        }
    }
//non working snippet
//setup client's socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;

fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid, &creadfds);

for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT recieved in client select\n");
            FD_ZERO(&creadfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }else{
        printf("Select returned %i\n", numsockets);
    }
    if(FD_ISSET(csid, &creadfds)){
        //read header
        unsigned char header[11];
        for(int i = 0; i < 11; i++){
            if(recv(csid, rubyte, 1, 0) != 0){
                printf("Received %X from client\n", *rubyte);
                header[i] = *rubyte;
            }
        }
Any help would be appreciated.
Thanks for the responses, but I don't believe it has much to do with the timeout value being inside the loop. I tested it, and even with tv being reset and the fd_set being zeroed every time the server loops, select() still returns 1 immediately. I feel like there's a problem with how select() is treating my TCP socket. Any time I make select()'s highest descriptor argument encompass my TCP socket, it returns immediately with that socket set. Also, the client does not send anything, it just connects.
One thing you must do is reset the value of tv to your desired timeout every time before you call select(). The select() function changes the values in tv to indicate how much time is left in the timeout, after returning from the function. If you fail to do this, your select() calls will end up using a timeout of zero, which is not efficient.
Some other operating systems implement select() differently, in such a way that they don't change the value of tv. Linux does change it, so you must reset it.
Move
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
into the loop. The function select() reports the result in this structure. You already retrieve the result with
FD_ISSET(csid,&creadfds);
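Putting both answers together, a minimal sketch of what the corrected client loop might look like (reusing csid and the 500 ms timeout from the question):
for(;;){
    // Re-initialize the set every iteration: select() overwrites it with the result.
    fd_set creadfds;
    FD_ZERO(&creadfds);
    FD_SET(csid, &creadfds);

    // Restore the timeout every iteration: on Linux select() modifies it.
    struct timeval ctv;
    ctv.tv_sec = 0;
    ctv.tv_usec = 500000;

    int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        perror("client select");
        continue;
    }
    if(numsockets == 0){
        continue; // timed out, nothing to read yet
    }
    if(FD_ISSET(csid, &creadfds)){
        // recv() will not block here
    }
}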

C++ socket windows

I have a question.
I create a socket, connect, and send bytes; all of that works.
For receiving data I use the recv() function:
char *TOReceive = new char[200];
recv(ConnectSocket, TOReceive, 200, 0);
When there is data it reads and returns successfully, and when there is no data it waits. All I need is to limit the waiting time: for example, if there is no data for 10 seconds, it should return.
Many Thanks.
Windows sockets has the select() function. You pass it a set containing the socket handle to check for readability, plus a timeout, and it returns telling you whether the socket became readable or whether the timeout was reached.
See: http://msdn.microsoft.com/en-us/library/ms740141(VS.85).aspx
Here's how to do it:
bool readyToReceive(int sock, int interval = 1)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(sock, &fds);

    timeval tv;
    tv.tv_sec = interval;
    tv.tv_usec = 0;

    return (select(sock + 1, &fds, 0, 0, &tv) == 1);
}
If it returns true, your next call to recv should return immediately with some data.
You could make this more robust by checking select for error return values and throwing exceptions in those cases. Here I just return true if it says one handle is ready to read, but that means I return false under all other circumstances, including the socket being already closed.
You have to call the select function prior to calling recv to know if there is something to be read.
You can use the SO_RCVTIMEO socket option to specify a timeout for the recv() call.
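On Windows, SO_RCVTIMEO takes a DWORD holding the number of milliseconds rather than a struct timeval. A minimal sketch, reusing the ConnectSocket handle from the question:
// 10-second receive timeout: recv() then fails with SOCKET_ERROR and
// WSAGetLastError() == WSAETIMEDOUT if no data arrives in time.
DWORD timeout_ms = 10000;
if (setsockopt(ConnectSocket, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout_ms, sizeof timeout_ms) != 0) {
    // handle error, e.g. with WSAGetLastError()
}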