I implemented a program that receives on one socket and sends/receives on another socket.
For this I poll both sockets with select(). On socket 1 I receive data at a high rate, while on socket 2 I receive periodic messages and requests to forward the data from socket 1 on to socket 2.
When there is no request from socket 2 to forward the data from socket 1, I receive data from socket 1 normally, with no problem. However, if I receive two requests on socket 2 while data is being received on socket 1, the second request breaks the data reception, as if it could no longer keep up with the rate (the rate isn't really high, only 150 Hz).
The pseudocode I run in main():
fd_set readfds, rd_fds, writefds, wr_fds;
struct timeval tv;
do
{
do
{
rd_fds = readfds;
wr_fds = writefds;
FD_ZERO (&rd_fds);
FD_SET (sock1, &rd_fds);
FD_SET (sock2, &rd_fds);
FD_SET (sock1, &wr_fds);
tv.tv_sec = 0;
tv.tv_usec = 20;
int ls = sock2 + 1;
rslt = select (ls, &rd_fds, &wr_fds, NULL, &tv);
}
while (rslt == -1 && errno == EINTR);
if (FD_ISSET (sock1, &rd_fds))
{
rs1 = recvfrom (sock1, buff, sizeof (buff), ....);
if (rs1 > 0)
{
if (rs1 == alive message)
{
/* system is alive; */
}
else if (rs1 == request message)
{
/* store Request info (list or vector) */
}
else {}
}
}
if (FD_ISSET (sock2, &rd_fds))
{
rs2 = recv (sock2, ..., 0);
if (rs2 > 0)
{
if ( /* Message (high rate) is from sock 2 */ )
{
/* process this message and do some computation */
int sp1 = sendto (sock1, .....);
if (sp1 < 0)
{
perror ("Failed data transmission ");
}
else
{
/* increase some counters */
}
}
}
}
if (FD_ISSET (sock1, &wr_fds))
{
/*
if there is info stored in the list,
do some calculations, then send to sock1
*/
if (sendto (sock1, ... ...) < 0)
{
perror ("Failed data transmission");
}
else
{
/* increase counter */
}
}
FD_CLR (sock1, &rd_fds);
FD_CLR (sock2, &rd_fds);
}
while (1);
Again, the question is: why is receiving from sock1 interrupted when a request is received from sock2 while I am receiving from sock1 (fast messages)? I expect interleaved messages in the output, based on the timestamps in the messages.
Note that nearly all socket functions can block execution unless you've created the socket with the O_NONBLOCK option:
http://pubs.opengroup.org/onlinepubs/009695399/functions/sendto.html
And you'll also have to handle the case where recvfrom only gives you a partial read - unless you use MSG_WAITALL:
http://pubs.opengroup.org/onlinepubs/009695399/functions/recvfrom.html
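For instance, a minimal sketch (assuming a POSIX system; the helper names here are made up for illustration) of switching a socket into non-blocking mode and tolerating the "no data yet" case:
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Put a descriptor into non-blocking mode. */
static int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* With a non-blocking socket, a receive that finds nothing is not an error. */
static ssize_t try_recv(int fd, char *buf, size_t len, int *would_block)
{
    ssize_t n = recv(fd, buf, len, 0);
    *would_block = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));
    return n;   /* > 0: bytes read; 0: peer closed (TCP); -1: check *would_block */
}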
Personally, I'd use a multi-threaded implementation which can have threads just sit and wait for data on each socket.
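As a rough illustration of that idea (not your code; the queue callback is hypothetical), each socket gets its own thread that simply blocks in recv() and hands whatever arrives to a thread-safe queue:
#include <pthread.h>
#include <sys/socket.h>

struct reader_args {
    int fd;
    void (*enqueue)(const char *data, ssize_t len);  /* user-supplied, thread-safe */
};

/* One of these runs per socket: no select(), just a blocking read loop. */
static void *reader_thread(void *arg)
{
    struct reader_args *a = arg;
    char buf[2048];
    for (;;) {
        ssize_t n = recv(a->fd, buf, sizeof buf, 0);
        if (n <= 0)
            break;              /* error or peer closed: let the thread exit */
        a->enqueue(buf, n);     /* hand the message to whoever processes it */
    }
    return NULL;
}

/* usage: pthread_create(&tid, NULL, reader_thread, &args) once per socket */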
As to your final question:
why is receiving from sock1 interrupted when a request is received from sock2 while I am receiving from sock1 (fast messages)? I expect interleaved messages in the output, based on the timestamps in the messages.
You are slave to the network stack's implementation and there are nearly no guarantees about the sending or receiving of data on one socket relative to another. You are only guaranteed that the data within a socket is properly ordered.
I expect interleaved messages in the output based on the timestamps in the message.
Your expectation is without foundation. If there is data in either socket receive buffer, select() will fire. That's all you can rely on. You don't have any guarantee about timestamps being observed and ordered as between multiple sockets.
Related
I'm new to socket programming. I'm trying to make a TCP listener which can handle multiple connections.
I found this example, which seems pretty useful.
The problem with this code is that it sends data to the connected clients only when data is received. I want to send data to the connected clients asynchronously.
I see that the select() function blocks the code forever until an event arrives at the socket.
I was thinking of putting a delay (instead of NULL) in the select() call so that it times out every so many microseconds and the program can then send data if there is any, at the line if (buffer[0] > 0).
The question is: Is there any better way to do what I want? Can I force select() to time out some other way?
The char buffer[1025] is a global array that is filled from another thread. The while(TRUE) loop below runs on the other thread.
while(TRUE)
{
//clear the socket set
FD_ZERO(&readfds);
//add master socket to set
FD_SET(master_socket, &readfds);
max_sd = master_socket;
//add child sockets to set
for ( i = 0 ; i < max_clients ; i++)
{
//socket descriptor
sd = client_socket[i];
//if valid socket descriptor then add to read list
if(sd > 0)
FD_SET( sd , &readfds);
//highest file descriptor number, need it for the select function
if(sd > max_sd)
max_sd = sd;
}
//wait for activity on one of the sockets; with this timeout,
//select() returns after at most 10 ms instead of blocking indefinitely
struct timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 10000;
activity = select( max_sd + 1 , &readfds , NULL , NULL , &timeout);
if ((activity < 0) && (errno!=EINTR))
{
cout << "select error" << endl;
}
//If something happened on the master socket ,
//then its an incoming connection
if (FD_ISSET(master_socket, &readfds))
{
if ((new_socket = accept(master_socket,
(struct sockaddr *)&address, (socklen_t*)&addrlen))<0)
{
perror("accept");
exit(EXIT_FAILURE);
}
//inform user of socket number - used in send and receive commands
printf("New connection , socket fd is %d , ip is : %s , port : %d \n" , new_socket , inet_ntoa(address.sin_addr) , ntohs(address.sin_port));
//send new connection greeting message
if( send(new_socket, message, strlen(message), 0) != strlen(message) )
{
perror("send");
}
puts("Welcome message sent successfully");
//add new socket to array of sockets
for (i = 0; i < max_clients; i++)
{
//if position is empty
if( client_socket[i] == 0 )
{
client_socket[i] = new_socket;
printf("Adding to list of sockets as %d\n" , i);
break;
}
}
}
if (buffer[0] > 0)
{
sd = client_socket[0];
send(sd , buffer , strlen(buffer) , 0 );
}
//else its some IO operation on some other socket
for (i = 0; i < max_clients; i++)
{
sd = client_socket[i];
if (FD_ISSET( sd , &readfds))
{
//Check if it was for closing , and also read the
//incoming message
if ((valread = read( sd , buffer, 1024)) == 0)
{
//Somebody disconnected , get his details and print
getpeername(sd , (struct sockaddr*)&address , \
(socklen_t*)&addrlen);
printf("Host disconnected , ip %s , port %d \n" ,
inet_ntoa(address.sin_addr) , ntohs(address.sin_port));
//Close the socket and mark as 0 in list for reuse
close( sd );
client_socket[i] = 0;
}
//Echo back the message that came in
else
{
//set the string terminating NULL byte on the end
//of the data read
buffer[valread] = '\0';
send(sd , buffer , strlen(buffer) , 0 );
}
}
}
}
The problem with this code is that it sends data to the connected clients only when data is received.
That is because the original code implements a request/response server - in this case, it is specifically an echo server.
I want to send data to the connected clients asynchronously.
You may want to first enumerate the condition(s)/trigger(s) under which the server starts sending data asynchronously. One example, that I'm making up impromptu, is a multi-echo server. Once such a server receives a string from the client, it echoes it 3 times, each echo separated by (say) 1 second.
In such a case, the select time-out can be used to break the wait and the "if buffer[0] > 0" check can be modified to send any pending echoes.
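A rough sketch of that shape, reusing the question's loop variables and a hypothetical send_pending_echoes() helper:
struct timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 10000;                 /* wake up every 10 ms even when idle */
activity = select(max_sd + 1, &readfds, NULL, NULL, &timeout);
if ((activity < 0) && (errno != EINTR)) {
    cout << "select error" << endl;
}
if (activity > 0) {
    /* service the master socket and client sockets exactly as before */
}
send_pending_echoes(client_socket, max_clients);   /* runs whether or not select() timed out */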
The question is: Is there any better way to do what I want? Can I force select() to time out some other way?
The man page indicates that there is a third way to break out of select: arranging for a signal to be delivered. But I think that should be reserved for handling signals delivered to the server program, and the select timeout should be used for dealing with asynchronous I/O. More precisely, I have seen production-grade servers use the epoll/kqueue (better alternatives to select) timeout in their async loop. So your approach seems quite reasonable.
It's quite common for single-threaded servers that have a big loop around a call to select or poll to have a list of "timers" -- things that have to be done at a particular time. When there's nothing the code needs to do, you check how long it is until the list of timers indicates that something needs to be done and you call select or poll specifying that length of time as the timeout.
So your loop tends to look like this:
Check the list of timers and calculate how long until we need to do something.
Call select or poll specifying that as the timeout.
If we discovered any sockets are ready for operations, do those operations.
If it's time to execute any timers, execute them.
Go to step 1.
If you need to do something periodically, set an initial timer to do that thing the first time. Then have the timer's execution code set a new timer to do the thing again.
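A minimal sketch of that loop shape; next_timer_ms(), run_expired_timers(), master_readfds and maxfd are all hypothetical names standing in for your own timer list and socket bookkeeping:
for (;;) {
    long ms = next_timer_ms();              /* time until the earliest timer, -1 if none */
    struct timeval tv, *ptv = NULL;
    if (ms >= 0) {
        tv.tv_sec  = ms / 1000;
        tv.tv_usec = (ms % 1000) * 1000;
        ptv = &tv;                          /* finite timeout only if a timer is pending */
    }
    fd_set rds = master_readfds;            /* copy: select() overwrites its sets */
    int n = select(maxfd + 1, &rds, NULL, NULL, ptv);
    if (n < 0 && errno != EINTR) {
        /* handle error */
    }
    if (n > 0) {
        /* perform the ready socket operations */
    }
    run_expired_timers();                   /* execute any timers whose time has come */
}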
I created two detached threads in my server, each running a poll() loop: one for receiving and one for sending data.
Everything works fine until I set the client's send period to 16 ms (down from ~200 ms). At that point one thread always wins the race and the server answers only one client, with ~1 us ping. What do I need to do to send and receive data with poll() on two (or one) UDP sockets?
Part of server's code:
struct pollfd pollStruct[2];
// int timeout_sec = 1;
pollStruct[0].fd = getReceiver()->getSocketDesc();
pollStruct[0].events = POLLIN;
pollStruct[0].revents = 0;
pollStruct[1].fd = getDispatcher()->getSocketDesc();
pollStruct[1].events = POLLOUT;
pollStruct[1].revents = 0;
while(true) {
if (poll(pollStruct, 2, 0 /* (-1 / timeout) */) > 0) {
if (pollStruct[0].revents & POLLIN) {
recvfrom(getReceiver()->getSocketDesc(), buffer, getBufferLength(), 0, (struct sockaddr *) &currentClient, getReceiver()->getLength());
// add message to client's own thread-safe buffer
}
if (pollStruct[1].revents & POLLOUT) {
// get message from thread-safe general buffer after processing
// dequeue one
if (!getBuffer().isEmpty()) {
auto message = getBuffer().dequeue();
if (message != nullptr) {
sendto(getDispatcher()->getSocketDesc(), "hi", 2, 0, message->_addr, *getDispatcher()->getLength());
}
}
}
}
}
I can't send very large data packets over my setup (currently sending to 127.0.0.1); at about 30 kB this functionality starts to fail. For testing I have an application that just starts a Receiver and a Sender, starts two threads (one for sending, one for receiving), and when both have finished compares whether the sent string is the same as the received string.
void SenderThread(int count)
{
messageOut = "";
messageOut.append(count, 'A');
sender->sendData(messageOut);
}
void ReceivingThread()
{
receiver->ReceiveData(message);
}
int main()
{
receiver = new utility::Receiver();
sender = new utility::Sender();
receiver->startSocket(9000);
sender->connectToSocket("127.0.0.1", 9000);
receiver->accept();
for (int count = 100; count < 1024 * 1024; count += 100)
{
std::thread sendThread(SenderThread, count);
std::thread recvThread(ReceivingThread);
sendThread.join();
recvThread.join();
printf("Sent data of length %d ", messageOut.length());
if (message == messageOut)
printf("successfully.\n");
else
{
printf("not successfully.\n");
printf("Length of original message: %d, Length of received message: %d.\n", messageOut.length(), message.length());
break;
}
}
delete receiver;
delete sender;
}
I have following code for my sending socket:
bool utility::Sender::sendData(const std::string & message)
{
int numBytes = 0;
int totalSent = 0;
// Break condition: send() fails, or whole message was transfered
while (totalSent < message.length() && send(message.substr(totalSent).c_str(), message.length() - totalSent, numBytes))
{
totalSent += numBytes;
}
return false;
}
bool utility::Sender::send(const char* pBuffer, int32_t lengthOfBuffer, int32_t &numBytes)
{
numBytes = ::send(connectSocket, pBuffer, lengthOfBuffer, 0);
if (numBytes == SOCKET_ERROR)
return false;
return true;
}
The receiving side:
bool utility::Receiver::ReceiveData(std::string& message)
{
int32_t numBytes = 0;
char data[defaultBufferLength];
// Set to blocking for the first data package
u_long iMode = 0;
ioctlsocket(tcpSocket, FIONBIO, &iMode);
bool success = receive(data, defaultBufferLength, numBytes);
message = std::string(data, numBytes);
// Set to non-blocking for the rest of the journey
iMode = 1;
ioctlsocket(tcpSocket, FIONBIO, &iMode);
while (numBytes == defaultBufferLength && receive(data, defaultBufferLength, numBytes))
{
message.append(data, numBytes);
}
return success;
}
bool utility::Receiver::receive(char* pBuffer, int32_t lengthOfBuffer, int32_t& numBytes)
{
int32_t flags = 0;
numBytes = recv(tcpSocket, pBuffer, lengthOfBuffer, flags);
if (numBytes == -1)
{
numBytes = 0;
if (errno == EAGAIN || errno == EWOULDBLOCK)
return false;
else
close();
}
return true;
}
The output I am getting is
Sent data of length 39200 successfully.
Sent data of length 39300 successfully.
Sent data of length 39400 successfully.
Sent data of length 39500 successfully.
Sent data of length 39600 successfully.
Sent data of length 39700 successfully.
Sent data of length 39800 successfully.
Sent data of length 39900 successfully.
Sent data of length 40000 successfully.
Sent data of length 40100 successfully.
Sent data of length 40200 successfully.
Sent data of length 40300 successfully.
Sent data of length 40400 successfully.
Sent data of length 40500 not successfully.
Length of original message: 40500, Length of received message: 29200.
The thing which is the most irritating, and probably the cause of this, is the ::send(...). I can give it 2 MB of char*, and it will just send it in one swoop (but the receiver fails miserably). What can I do about that?
TCP is a byte-oriented protocol, not message oriented.
send does not create a message. recv does not receive a message. They work on blocks of bytes, and multiple send calls can be combined at the network layer (for efficiency) or broken into multiple TCP packets. In practice, even if you turn off Nagle's algorithm, if a frame is lost at the physical layer and TCP has to retry the transmission, the retransmit will include as much data added to the buffer afterward as it can fit in an outgoing datagram.
So you can't rely on any particular mapping between send calls and recv calls. The only guarantee is that the bytes are delivered to your socket in the same order they were sent. If boundaries are important, you have to create them yourself. Length prefixes are popular in combination with TCP, special framing sequences less so.
You do already have a loop for reassembling messages... but you break out of the loop when you see EAGAIN / EWOULDBLOCK or a partly filled buffer, and continue processing. That's a problem, because you only have a partial message at that point. You need a way to delay processing until you have a complete message.
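For instance, a sketch of the length-prefix approach on the sending side, assuming a POSIX-style socket API (adjust the types for Winsock); send_all and send_message are made-up names, not part of your utility classes:
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <sys/socket.h>

/* Send exactly len bytes, looping over short sends. */
static int send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;   /* error (with non-blocking sockets, EAGAIN needs handling too) */
        sent += (size_t)n;
    }
    return 0;
}

/* Frame one message as [4-byte big-endian length][payload]. */
static int send_message(int fd, const char *payload, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (send_all(fd, (const char *)&hdr, sizeof hdr) == -1)
        return -1;
    return send_all(fd, payload, len);
}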
Adding to Ben Voigt's answer: you need to create a higher-level message system for your socket, so that you can tell the server the message size. In your receive method, keep a session or buffer that you append the received data to until the total received matches the message size; once that requirement is met, you can process the data.
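On the receiving side the same idea (same headers and assumptions as the sending sketch above) is to read the 4-byte length first and then keep reading until that many payload bytes have arrived before processing anything:
/* Receive exactly len bytes, looping over short reads. */
static int recv_all(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return -1;   /* error, or the connection closed mid-message */
        got += (size_t)n;
    }
    return 0;
}

/* Read one framed message: 4-byte big-endian length, then the payload. */
static int recv_message(int fd, char *buf, uint32_t maxlen, uint32_t *outlen)
{
    uint32_t hdr;
    if (recv_all(fd, (char *)&hdr, sizeof hdr) == -1)
        return -1;
    uint32_t len = ntohl(hdr);
    if (len > maxlen)
        return -1;       /* message too large for the caller's buffer */
    if (recv_all(fd, buf, len) == -1)
        return -1;
    *outlen = len;
    return 0;
}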
I have written simple client/server applications to test the characteristics of non-blocking sockets. Here is some brief information about the server and client:
//On linux The server thread will send
//a file to the client using non-blocking socket
void *SendFileThread(void *param){
CFile* theFile = (CFile*) param;
int sockfd = theFile->GetSocket();
set_non_blocking(sockfd);
set_sock_sndbuf(sockfd, 1024 * 64); //set the send buffer to 64K
//get the total packets count of target file
int PacketCount = theFile->GetFilePacketsCount();
int currPacket = 0;
while (currPacket < PacketCount){
char buffer[512];
int len = 0;
//get packet data by packet no.
GetPacketData(currPacket, buffer, len);
//send_non_blocking_sock_data will loop and send
//data into buffer of sockfd until there is error
int ret = send_non_blocking_sock_data(sockfd, buffer, len);
if (ret < 0 && errno == EAGAIN){
continue;
} else if (ret < 0 || ret == 0 ){
break;
} else {
currPacket++;
}
......
}
}
//On windows, the client thread will do something like below
//to receive the file data sent by the server via block socket
void *RecvFileThread(void *param){
int sockfd = (int) param; //blocking socket
set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K
while (1){
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
fd_set rds;
FD_ZERO(&rds);
FD_SET(sockfd, &rds)'
//actually, the first parameter of select() is
//ignored on windows, though on linux this parameter
//should be (maximum socket value + 1)
int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout );
if (ret == 0){
// log that timer expires
CLogger::log("RecvFileThread---Calling select() timeouts\n");
} else if (ret) {
//log the number of data it received
int ret = 0;
char buffer[1024 * 256];
int len = recv(sockfd, buffer, sizeof(buffer), 0);
// handle error
process_tcp_data(buffer, len);
} else {
//handle and break;
break;
}
}
}
What surprised me is that the server thread fails frequently because the socket buffer is full: e.g. to send a file of 14 MB it reports 50000 failures with errno = EAGAIN. However, via logging I observed that there are tens of timeouts during the transfer; the flow is like below:
on the Nth loop, select() succeeds and 256 KB of data are read successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and 256 KB of data are read successfully.
Why would there be timeouts interleaved during the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14 MB to the server takes only 8 seconds.
2. Using the same file as in 1), the server takes nearly 30 seconds to send all the data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason #2 takes much more time than #1, and I wonder why there would be so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from @Duck, @ebrob, @EJP and @ja_mesa; I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found that the server thread sends data much faster than the client thread receives it. I am very confused about why timeouts happen in the client thread.
Consider this more of a long comment than an answer but as several people have noted the network is orders of magnitude slower than your processor. The point of non-blocking i/o is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much is chopped up for posting but in the server you don't account for (ret == 0) i.e. normal shutdown by the peer.
The select in the client is wrong. Again, I'm not sure whether that was sloppy editing, but if not, the number of parameters is wrong and, more concerning, the first parameter - which should be the highest file descriptor for select to look at, plus one - is zero. Depending on the implementation of select, I wonder if that in fact just turns select into a fancy sleep statement.
You should be calling recv() first and then call select() only if recv() tells you to do so. Don't call select() first, that is a waste of processing. recv() knows if data is immediately available or if it has to wait for data to arrive:
void *RecvFileThread(void *param){
int sockfd = (int) param; //blocking socket
set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K
char buffer[1024 * 256];
while (1){
int ret = 0;
int len = recv(sockfd, buffer, sizeof(buffer), 0);
if (len == -1) {
if (WSAGetLastError() != WSAEWOULDBLOCK) {
//handle error
break;
}
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
fd_set rds;
FD_ZERO(&rds);
FD_SET(sockfd, &rds)'
//actually, the first parameter of select() is
//ignored on windows, though on linux this parameter
//should be (maximum socket value + 1)
int ret = select(sockfd + 1, &rds, NULL, &timeout );
if (ret == -1) {
// handle error
break;
}
if (ret == 0) {
// log that timer expires
break;
}
// socket is readable so try read again
continue;
}
if (len == 0) {
// handle graceful disconnect
break;
}
//log the number of data it received
process_tcp_data(buffer, len);
}
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
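For the non-blocking server side, a sketch of that send-then-select pattern (using the question's sockfd/buffer/len names; not a drop-in replacement for send_non_blocking_sock_data):
ssize_t sent_total = 0;
while (sent_total < len) {
    ssize_t n = send(sockfd, buffer + sent_total, len - sent_total, 0);
    if (n == -1) {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            break;                         /* real error */
        fd_set wds;
        FD_ZERO(&wds);
        FD_SET(sockfd, &wds);
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;
        int ret = select(sockfd + 1, NULL, &wds, NULL, &timeout);
        if (ret <= 0)
            break;                         /* error or timer expired */
        continue;                          /* writable again: retry the send */
    }
    sent_total += n;
}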
I'm doing a c++ project that requires a server to create a new thread to handle connections each time accept() returns a new socket descriptor. I am using select to decide when a connection attempt has taken place as well as when a client has sent data over the newly created client socket (the one that accept creates). So two functions and two selects - one for polling the socket dedicated to listening for connections, one for polling the socket created when a new connection is successful.
The behavior of the first case is what I expect: FD_ISSET returns true for the id of my listening socket only when a connection is requested, and is false until the next connection attempt. The second case does not work, even though the code is exactly the same apart from the different fd_set and socket objects. I'm wondering if this stems from the TCP socket: do these sockets always return true when polled by select due to their stream-oriented nature?
//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid,&readfds);
//start server loop
for(;;){
//check if listening socket has any client requests, timeout at 500 ms
int numsockets = select(sid+1,&readfds,NULL,NULL,&tv);
if(numsockets == -1){
if(errno == 4){
printf("SIGINT recieved in select\n");
FD_ZERO(&readfds);
myhandler(SIGINT);
}else{
perror("server select");
exit(1);
}
}
//check if listening socket is ready to be read after select returns
if(FD_ISSET(sid, &readfds)){
int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
if(newsocketfd == -1){
if(errno == 4){
printf("SIGINT recieved in accept\n");
myhandler(SIGINT);
}else{
perror("server accept");
exit(1);
}
}else{
s->forkThreadForClient(newsocketfd);
}
}
//non working snippet
//setup clients socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;
fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
for(;;){
//check if the client socket has any data, timeout at 500 ms
int numsockets = select(csid+1,&creadfds,NULL,NULL,&ctv);
if(numsockets == -1){
if(errno == 4){
printf("SIGINT recieved in client select\n");
FD_ZERO(&creadfds);
myhandler(SIGINT);
}else{
perror("server select");
exit(1);
}
}else{
printf("Select returned %i\n",numsockets);
}
if(FD_ISSET(csid,&creadfds)){
//read header
unsigned char header[11];
for(int i=0;i<11;i++){
if(recv(csid, rubyte, 1, 0) != 0){
printf("Received %X from client\n",*rubyte);
header[i] = *rubyte;
}
}
Any help would be appreciated.
Thanks for the responses, but I don't believe it has much to do with the timeout value being inside the loop. I tested it, and even with tv being reset and the fd_set being zeroed every time the server loops, select() still returns 1 immediately. I feel like there's a problem with how select is treating my TCP socket: any time I set select's highest socket id to encompass my TCP socket, it returns immediately with that socket set. Also, the client does not send anything, it just connects.
One thing you must do is reset the value of tv to your desired timeout every time before you call select(). The select() function changes the values in tv to indicate how much time is left in the timeout, after returning from the function. If you fail to do this, your select() calls will end up using a timeout of zero, which is not efficient.
Some other operating systems implement select() differently, in such a way that they don't change the value of tv. Linux does change it, so you must reset it.
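Applied to your snippet, that just means re-initialising the timeout at the top of the loop, for example:
for(;;){
    struct timeval ctv;
    ctv.tv_sec = 0;
    ctv.tv_usec = 500000;   //reset every iteration: Linux select() counts it down
    int numsockets = select(csid+1,&creadfds,NULL,NULL,&ctv);
    //... rest of the loop as before ...
}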
Move
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
into the loop. The function select() reports the result in this structure. You already retrieve the result with
FD_ISSET(csid,&creadfds);