I can't send large data packets over my setup (currently sending to 127.0.0.1); at about 30 kB this functionality starts to fail. For testing I have an application that creates a Receiver and a Sender, starts two threads (one for sending, one for receiving), and when both have finished, compares whether the sent string is the same as the received string.
#include <cstdio>
#include <string>
#include <thread>

utility::Receiver* receiver;
utility::Sender* sender;
std::string message;     // received data
std::string messageOut;  // sent data

void SenderThread(int count)
{
    messageOut = "";
    messageOut.append(count, 'A');
    sender->sendData(messageOut);
}

void ReceivingThread()
{
    receiver->ReceiveData(message);
}

int main()
{
    receiver = new utility::Receiver();
    sender = new utility::Sender();

    receiver->startSocket(9000);
    sender->connectToSocket("127.0.0.1", 9000);
    receiver->accept();

    for (int count = 100; count < 1024 * 1024; count += 100)
    {
        std::thread sendThread(SenderThread, count);
        std::thread recvThread(ReceivingThread);
        sendThread.join();
        recvThread.join();

        printf("Sent data of length %zu ", messageOut.length());
        if (message == messageOut)
            printf("successfully.\n");
        else
        {
            printf("not successfully.\n");
            printf("Length of original message: %zu, Length of received message: %zu.\n",
                   messageOut.length(), message.length());
            break;
        }
    }

    delete receiver;
    delete sender;
}
I have the following code for my sending socket:
bool utility::Sender::sendData(const std::string& message)
{
    int32_t numBytes = 0;
    std::size_t totalSent = 0;
    // Break condition: send() fails, or the whole message was transferred
    while (totalSent < message.length())
    {
        if (!send(message.c_str() + totalSent, static_cast<int32_t>(message.length() - totalSent), numBytes))
            return false;
        totalSent += numBytes;
    }
    return true;
}
bool utility::Sender::send(const char* pBuffer, int32_t lengthOfBuffer, int32_t &numBytes)
{
numBytes = ::send(connectSocket, pBuffer, lengthOfBuffer, 0);
if (numBytes == SOCKET_ERROR)
return false;
return true;
}
The receiving side:
bool utility::Receiver::ReceiveData(std::string& message)
{
int32_t numBytes = 0;
char data[defaultBufferLength];
// Set to blocking for the first data package
u_long iMode = 0;
ioctlsocket(tcpSocket, FIONBIO, &iMode);
bool success = receive(data, defaultBufferLength, numBytes);
message = std::string(data, numBytes);
// Set to non-blocking for the rest of the journey
iMode = 1;
ioctlsocket(tcpSocket, FIONBIO, &iMode);
while (numBytes == defaultBufferLength && receive(data, defaultBufferLength, numBytes))
{
message.append(data, numBytes);
}
return success;
}
bool utility::Receiver::receive(char* pBuffer, int32_t lengthOfBuffer, int32_t& numBytes)
{
int32_t flags = 0;
numBytes = recv(tcpSocket, pBuffer, lengthOfBuffer, flags);
if (numBytes == -1)
{
numBytes = 0;
if (errno == EAGAIN || errno == EWOULDBLOCK)
return false;
else
close();
}
return true;
}
The output I am getting is
Sent data of length 39200 successfully.
Sent data of length 39300 successfully.
Sent data of length 39400 successfully.
Sent data of length 39500 successfully.
Sent data of length 39600 successfully.
Sent data of length 39700 successfully.
Sent data of length 39800 successfully.
Sent data of length 39900 successfully.
Sent data of length 40000 successfully.
Sent data of length 40100 successfully.
Sent data of length 40200 successfully.
Sent data of length 40300 successfully.
Sent data of length 40400 successfully.
Sent data of length 40500 not successfully.
Length of original message: 40500, Length of received message: 29200.
The most irritating thing, and probably the cause of this, is ::send(...). I can give it 2 MB of char*, and it will just send it in one swoop (but the receiver fails miserably). What can I do about that?
TCP is a byte-oriented protocol, not message-oriented.
send does not create a message. recv does not receive a message. They work on blocks of bytes, and multiple send calls can be combined at the network layer (for efficiency) or broken into multiple TCP packets. In practice, even if you turn off Nagle's algorithm, if a frame is lost at the physical layer and TCP has to retry the transmission, the retransmit will include as much data added to the buffer afterward as it can fit in an outgoing datagram.
So you can't rely on any particular mapping between send calls and recv calls. The only guarantee is that the bytes are delivered to your socket in the same order they were sent. If boundaries are important, you have to create them yourself. Length prefixes are popular in combination with TCP, special framing sequences less so.
You do already have a loop for reassembling messages... but you break out of the loop when you see EAGAIN / EWOULDBLOCK or a partly filled buffer, and continue processing. That's a problem, because you only have a partial message at that point. You need a way to delay processing until you have a complete message.
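To make that concrete, here is a minimal length-prefix framing sketch over a blocking Winsock socket. The helpers sendAll, recvAll, sendMessage, and recvMessage are hypothetical names, not the asker's utility classes, and this sketch does no sanity check on the announced length:
#include <winsock2.h>
#include <cstdint>
#include <string>

// Hypothetical helper: loop until exactly `length` bytes have been sent.
static bool sendAll(SOCKET s, const char* data, int length)
{
    int total = 0;
    while (total < length)
    {
        int n = ::send(s, data + total, length - total, 0);
        if (n == SOCKET_ERROR)
            return false;
        total += n;
    }
    return true;
}

// Hypothetical helper: loop until exactly `length` bytes have been received.
static bool recvAll(SOCKET s, char* data, int length)
{
    int total = 0;
    while (total < length)
    {
        int n = ::recv(s, data + total, length - total, 0);
        if (n <= 0) // 0 = orderly shutdown, SOCKET_ERROR = failure
            return false;
        total += n;
    }
    return true;
}

// Sender: a 4-byte big-endian length prefix, then the payload.
bool sendMessage(SOCKET s, const std::string& message)
{
    uint32_t len = htonl(static_cast<uint32_t>(message.size()));
    return sendAll(s, reinterpret_cast<const char*>(&len), static_cast<int>(sizeof(len)))
        && sendAll(s, message.data(), static_cast<int>(message.size()));
}

// Receiver: read the prefix, then exactly that many payload bytes.
bool recvMessage(SOCKET s, std::string& message)
{
    uint32_t len = 0;
    if (!recvAll(s, reinterpret_cast<char*>(&len), static_cast<int>(sizeof(len))))
        return false;
    message.resize(ntohl(len));
    return message.empty()
        || recvAll(s, &message[0], static_cast<int>(message.size()));
}
With this scheme the receiver never has to guess where a message ends: it stays blocking and reads until the declared byte count has arrived, no matter how TCP splits or coalesces the stream.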
Adding to Ben Voigt's answer: you need to create a higher-level message system for your socket, so that you can send the server the message size first. In your socket receive method, create a session or buffer storage to which you append the received data until the total received matches the message size; once that requirement is met, you can process the data.
Related
I have implemented a secure websocket client (with OpenSSL encryption) in C++. The problem is that for received messages larger than 16k bytes, the client does not receive the ping message from the server separately; rather, the ping message is appended to the tail end of the preceding data message. As a result, the client does not parse the ping message, does not send a PONG reply, and the server closes the connection on timeout.
I am using a Python-based websocket server to test my client application. As far as I have tested, if the packet size sent from the server is below 16k bytes, the pings are correctly received consistently during the test.
This is my read_from_websocket function
int read_from_websocket(char *recv_buff)
{
    int buf_capacity = INT_MAX;
    int buf_offset = 0;
    int read_count; // signed, so the negative sentinels below compare correctly
    do
    {
        read_count = ssl_read(recv_buff + buf_offset, buf_capacity);
        if (read_count == P_FD_ERR) // P_FD_ERR = -1
        {
            return -1;
        }
        else if (read_count == P_FD_PENDING) // P_FD_PENDING = -2
        {
            break;
        }
        if (read_count == 0)
        {
            break; // EOF
        }
        buf_offset += read_count;
        buf_capacity -= read_count;
    } while (buf_capacity);
    return INT_MAX - buf_capacity;
}
/**SSL read function being called by the above websocket read function**/
int ssl_read(void *buf, size_t count)
{
int len = SSL_read(ssl, buf, count);
if (len < 0)
{
int err = SSL_get_error(ssl, len);
if (err == SSL_ERROR_WANT_READ)
{
return P_FD_PENDING;
}
else
{
return P_FD_ERR;
}
}
return len;
}
I don't understand:
how to ensure that a single call to read_from_websocket() always returns only one complete message
why does this only happen in case of incoming ping messages and only when the preceding message is greater than 16k bytes long
whether the server side can be a culprit here since I'm using an off-the-shelf server code
I am trying to check whether a client has sent some new data. This function actually tells me that I always have new data:
bool ClientHandle::hasData()
{
    fd_set temp;
    FD_ZERO(&temp);
    FD_SET(m_sock, &temp);
    // set up the timeout to 1 ms
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 1000;
    // temp.fd_count possible?
    if (select(m_sock + 1, &temp, nullptr, nullptr, &tv) == -1)
    {
        return false;
    }
    if (FD_ISSET(m_sock, &temp))
        return true;
    return false;
}
I am connecting with a Java client and sending a "connection" message, which I read inside the constructor:
ClientHandle::ClientHandle(SOCKET s) : m_sock(s)
{
while (!hasData())
{
}
char buffer[5];
recv(m_sock, buffer, 4, NULL);
auto i = atoi(buffer);
LOG_INFO << "Byte to receive: " << i;
auto dataBuffer = new char[i + 1]{'\0'};
recv(m_sock, dataBuffer, i, NULL);
LOG_INFO << dataBuffer;
//clean up
delete[] dataBuffer;
}
This seems to work right. After that I keep checking if there is new data, which is always true even if the Java client does not send any new data.
Here is the Java client. Don't judge me, it's just for checking the connection. It won't stay like this, sending the size information as a char[].
public static void main(String[] args) throws UnknownHostException,
IOException {
Socket soc = null;
soc = new Socket("localhost", 6060);
PrintWriter out = new PrintWriter(soc.getOutputStream(), true);
BufferedReader in = new BufferedReader(new InputStreamReader(
soc.getInputStream()));
if (soc != null)
System.out.println("Connected");
out.write("10\0");
out.flush();
out.write("newCon\0");
out.flush();
out.close();
in.close();
soc.close();
}
So what is wrong with the hasData FD_ISSET method?
So what is wrong with the hasData FD_ISSET method?
Actually, nothing. There is, however, a problem with your use of recv().
recv() will return 0 if the client is disconnected and will return this until you close the socket (server-side). You can find this information in the manual.
Even if recv() returns 0, it will "trigger" select().
Knowing that, it's easy to find out the problem: you never check the return value of recv() and so you're unable to say if the client is still connected or not. However, you still add it with FD_SET!
#include <sys/types.h> // for ssize_t
#include <stdio.h> // for perror()
ClientHandle::ClientHandle(SOCKET s) : m_sock(s)
{
while (!hasData())
{
}
char buffer[5];
ssize_t ret = recv(m_sock, buffer, 4, NULL);
if (ret == -1) // error
{
perror("recv");
return ;
}
else if (ret == 0) // m_sock disconnects
{
close(m_sock);
// DO NOT FD_SET m_sock since the socket is now closed
}
else
{
auto i = atoi(buffer);
LOG_INFO << "Byte to receive: " << i;
auto dataBuffer = new char[i + 1]{'\0'};
recv(m_sock, dataBuffer, i, NULL);
LOG_INFO << dataBuffer;
//clean up
delete[] dataBuffer;
}
}
From Stevens' book UNIX Network Programming:
A socket is ready for reading if any of the following four conditions is true:
The number of bytes of data in the socket receive buffer is greater than or equal to the current size of the low-water mark for the socket receive buffer. A read operation on the socket will not block and will return a value greater than 0 (i.e., the data that is ready to be read). We can set this low-water mark using the SO_RCVLOWAT socket option. It defaults to 1 for TCP and UDP sockets.
The read half of the connection is closed (i.e., a TCP connection that has received a FIN). A read operation on the socket will not block and will return 0 (i.e., EOF).
The socket is a listening socket and the number of completed connections is nonzero. An accept on the listening socket will normally not block, although we will describe a timing condition in Section 16.6 under which the accept can block.
A socket error is pending. A read operation on the socket will not block and will return an error (–1) with errno set to the specific error condition. These pending errors can also be fetched and cleared by calling getsockopt and specifying the SO_ERROR socket option.
FD_ISSET is going to return true in all the cases above. After your Java client closes the connection, the socket will be ready for reading in the server.
In ClientHandle::ClientHandle you are not checking the return value of recv and if any data is returned.
Is it blocking in the second call to recv?
You don't check the return value of recv and you don't handle receiving fewer bytes than you asked for. So what do you expect to happen when the connection is closed?
I implemented a program that receives on one socket and sends/receives on another socket.
For this I poll with select(). On socket 1 I receive data at a high rate, while on the other socket I receive periodic messages and requests to forward the data from socket 1 to socket 2.
When there is no request from socket 2 to forward the data from socket 1, I receive data from socket 1 normally and with no problem. However, say I receive two requests on socket 2 while data is being received on socket 1: the second request breaks the data reception, as if it could no longer keep up with the rate (the rate isn't really high, only 150 Hz).
The pseudocode I run in main():
fd_set readfds, rd_fds, writefds, wr_fds;
struct timeval tv;
do
{
do
{
        rd_fds = readfds;
        wr_fds = writefds;
        FD_ZERO (&rd_fds);
        FD_ZERO (&wr_fds);
        FD_SET (sock1, &rd_fds);
        FD_SET (sock2, &rd_fds);
        FD_SET (sock1, &wr_fds);
        tv.tv_sec = 0;
        tv.tv_usec = 20;
        // nfds must be the highest descriptor in any set, plus one
        int ls = (sock1 > sock2 ? sock1 : sock2) + 1;
        rslt = select (ls, &rd_fds, &wr_fds, NULL, &tv);
}
while (rslt == -1 && errno == EINTR);
if (FD_ISSET (sock1, &rd_fds))
{
rs1 = recvfrom (sock1, buff, size of the buff, ....);
if (rs1 > 0)
{
if (rs1 = alive message)
{
/* system is alive; */
}
else if (rs1 == request message)
{
/* store Request info (list or vector) */
}
else {}
}
}
if (FD_ISSET (sock2, &rd_fds))
{
rs2 = recv (sock2, ..., 0);
if (rs2 > 0)
{
if ( /* Message (high rate) is from sock 2 */ )
{
/* process this message and do some computation */
int sp1 = sendto (sock1, .....);
if (sp1 < 0)
{
perror ("Failed data transmission ");
}
else
{
/* increase some counters */
}
}
}
}
if (FD_ISSET (sock1, &wr_fds))
{
/*
if there info stored in the list
do some calculaitons then send to sock 1
*/
if (sendto (sock1, ... ...) < 0)
{
perror ("Failed data transmission");
}
else
{
/* increase counter */
}
}
FD_CLR (sock1, &rd_fds);
FD_CLR (sock2, &rd_fds);
}
while (1);
Again, the question is: why is receiving from sock1 interrupted when a request is received from sock2? While I am receiving from sock1 (fast messages), I expect interleaved messages in the output, based on the timestamps in the messages.
Note that nearly all socket functions can block execution unless you've created the socket with the O_NONBLOCK option:
http://pubs.opengroup.org/onlinepubs/009695399/functions/sendto.html
And you'll also have to handle the case where recvfrom only gives you a partial read - unless you use MSG_WAITALL:
http://pubs.opengroup.org/onlinepubs/009695399/functions/recvfrom.html
Personally, I'd use a multi-threaded implementation which can have threads just sit and wait for data on each socket.
As to your final question:
why is receiving from sock1 interrupted when a request is received from sock2? While I am receiving from sock1 (fast messages), I expect interleaved messages in the output, based on the timestamps in the messages.
You are slave to the network stack's implementation and there are nearly no guarantees about the sending or receiving of data on one socket relative to another. You are only guaranteed that the data within a socket is properly ordered.
I expect interleaved messages in the output, based on the timestamps in the messages.
Your expectation is without foundation. If there is data in either socket receive buffer, select() will fire. That's all you can rely on. You don't have any guarantee about timestamps being observed and ordered as between multiple sockets.
I have written a simple client/server application to test the characteristics of non-blocking sockets. Here is some brief information about the server and client:
// On Linux, the server thread will send
// a file to the client using a non-blocking socket
void *SendFileThread(void *param){
    CFile* theFile = (CFile*) param;
    int sockfd = theFile->GetSocket();
    set_non_blocking(sockfd);
    set_sock_sndbuf(sockfd, 1024 * 64); // set the send buffer to 64 KB
    // get the total packet count of the target file
    int PacketCount = theFile->GetFilePacketsCount();
    int currPacket = 0;
    while (currPacket < PacketCount){
        char buffer[512];
        int len = 0;
        // get packet data by packet no.
        GetPacketData(currPacket, buffer, len);
        // send_non_blocking_sock_data will loop and send
        // data from the buffer into sockfd until there is an error
        int ret = send_non_blocking_sock_data(sockfd, buffer, len);
        if (ret < 0 && errno == EAGAIN){
            continue;
        } else if (ret <= 0){
            break;
        } else {
            currPacket++;
        }
        ......
    }
}
// On Windows, the client thread will do something like the following
// to receive the file data sent by the server via a blocking socket
void *RecvFileThread(void *param){
    int sockfd = (int) param; // blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); // set the receive buffer to 256 KB
    while (1){
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;
        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds);
        // actually, the first parameter of select() is
        // ignored on Windows, though on Linux this parameter
        // should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
        if (ret == 0){
            // log that the timer expired
            CLogger::log("RecvFileThread---Calling select() timeouts\n");
        } else if (ret > 0) {
            // log the amount of data received
            char buffer[1024 * 256];
            int len = recv(sockfd, buffer, sizeof(buffer), 0);
            // handle error
            process_tcp_data(buffer, len);
        } else {
            // handle error and break
            break;
        }
    }
}
What surprised me is that the server thread fails frequently because the socket buffer is full: e.g., to send a file of 14 MB in size, it reports 50,000 failures with errno == EAGAIN. However, via logging I observed that there are tens of timeouts during the transfer; the flow is like this:
on the Nth loop, select() succeeds and 256 kB of data is read successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and 256 kB of data is read successfully.
Why would there be timeouts interleaved with the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14 MB to the server takes only 8 seconds.
2. Using the same file as in 1), the server takes nearly 30 seconds to send all the data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason why #2 takes much more time than #1, and I wonder why there are so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from @Duck, @ebrob, @EJP, @ja_mesa. I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found that the server thread sends data much faster than the client thread receives it. I am very confused about why timeouts happen on the client thread.
Consider this more of a long comment than an answer, but as several people have noted, the network is orders of magnitude slower than your processor. The point of non-blocking I/O is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much was chopped up for posting, but in the server you don't account for (ret == 0), i.e. normal shutdown by the peer.
The select in the client is wrong. Again, not sure if that was sloppy editing or not, but if not, then the number of parameters is wrong and, more concerning, the first parameter - i.e. the one that should be the highest file descriptor for select to look at, plus one - is zero. Depending on the implementation of select, I wonder if that is in fact just turning select into a fancy sleep statement.
You should be calling recv() first, and then call select() only if recv() tells you to do so. Don't call select() first; that is a waste of processing. recv() knows whether data is immediately available or whether it has to wait for data to arrive:
void *RecvFileThread(void *param){
    int sockfd = (int) param; // blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); // set the receive buffer to 256 KB
    char buffer[1024 * 256];
    while (1){
        int len = recv(sockfd, buffer, sizeof(buffer), 0);
        if (len == -1) {
            if (WSAGetLastError() != WSAEWOULDBLOCK) {
                // handle error
                break;
            }
            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;
            fd_set rds;
            FD_ZERO(&rds);
            FD_SET(sockfd, &rds);
            // actually, the first parameter of select() is
            // ignored on Windows, though on Linux this parameter
            // should be (maximum socket value + 1)
            int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
            if (ret == -1) {
                // handle error
                break;
            }
            if (ret == 0) {
                // log that the timer expired
                break;
            }
            // socket is readable, so try the read again
            continue;
        }
        if (len == 0) {
            // handle graceful disconnect
            break;
        }
        // log the amount of data received
        process_tcp_data(buffer, len);
    }
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
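A rough sketch of that send-first pattern for the (Linux, non-blocking) sending side; send_all_nonblocking is an illustrative helper name, not from the original code:
#include <cerrno>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

// Send all `len` bytes on a non-blocking socket, waiting for writability
// only when the kernel send buffer is actually full.
// Returns false on a fatal error.
bool send_all_nonblocking(int sockfd, const char* buffer, size_t len)
{
    size_t total = 0;
    while (total < len)
    {
        ssize_t n = ::send(sockfd, buffer + total, len - total, 0);
        if (n >= 0)
        {
            total += static_cast<size_t>(n);
            continue;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return false; // real error
        // The send buffer is full: only now block in select()
        // until the socket becomes writable again.
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sockfd, &wfds);
        if (::select(sockfd + 1, NULL, &wfds, NULL, NULL) == -1)
            return false;
    }
    return true;
}
This way select() is consulted only after send() has reported a full buffer, instead of being polled on every iteration.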
I'm using the WSAEventSelect I/O model in Windows Sockets, and I want to know how I can tell that my send and receive operations have sent and received all of the data.
After I know that, how should I design the sending so that the data always gets sent fully? Any examples would be really appreciated.
Here is the code (sample code from the book I'm learning from):
SOCKET SocketArray [WSA_MAXIMUM_WAIT_EVENTS];
WSAEVENT EventArray [WSA_MAXIMUM_WAIT_EVENTS],
NewEvent;
SOCKADDR_IN InternetAddr;
SOCKET Accept, Listen;
DWORD EventTotal = 0;
DWORD Index, i;
WSANETWORKEVENTS NetworkEvents;
char buffer[1024]; // assumed receive buffer; its declaration is not shown in the book excerpt
// Set up socket for listening etc...
// ....
NewEvent = WSACreateEvent();
WSAEventSelect(Listen, NewEvent,
FD_ACCEPT | FD_CLOSE);
listen(Listen, 5);
SocketArray[EventTotal] = Listen;
EventArray[EventTotal] = NewEvent;
EventTotal++;
while(TRUE)
{
// Wait for network events on all sockets
Index = WSAWaitForMultipleEvents(EventTotal,
EventArray, FALSE, WSA_INFINITE, FALSE);
Index = Index - WSA_WAIT_EVENT_0;
// Iterate through all events to see if more than one is signaled
for (i = Index; i < EventTotal; i++)
{
Index = WSAWaitForMultipleEvents(1, &EventArray[i], TRUE, 1000,
FALSE);
if ((Index == WSA_WAIT_FAILED) || (Index == WSA_WAIT_TIMEOUT))
continue;
else
{
Index = i;
WSAEnumNetworkEvents(
SocketArray[Index],
EventArray[Index],
&NetworkEvents);
// Check for FD_ACCEPT messages
if (NetworkEvents.lNetworkEvents & FD_ACCEPT)
{
if (NetworkEvents.iErrorCode[FD_ACCEPT_BIT] != 0)
{
printf("FD_ACCEPT failed with error %d\n",
NetworkEvents.iErrorCode[FD_ACCEPT_BIT]);
break;
}
// Accept a new connection, and add it to the
// socket and event lists
Accept = accept(
SocketArray[Index],
NULL, NULL);
NewEvent = WSACreateEvent();
WSAEventSelect(Accept, NewEvent,
FD_READ | FD_CLOSE);
EventArray[EventTotal] = NewEvent;
SocketArray[EventTotal] = Accept;
EventTotal++;
printf("Socket %d connected\n", Accept);
}
// Process FD_READ notification
if (NetworkEvents.lNetworkEvents & FD_READ)
{
if (NetworkEvents.iErrorCode[FD_READ_BIT] != 0)
{
printf("FD_READ failed with error %d\n",
NetworkEvents.iErrorCode[FD_READ_BIT]);
break;
}
// Read data from the socket
recv(SocketArray[Index],
buffer, sizeof(buffer), 0);
// here I do some processing on the data received
DoSomething(buffer);
// now I want to send data
send(SocketArray[Index],
buffer, sizeof(buffer), 0);
// how can I be assured that the data is sent completely
}
// FD_CLOSE handling here
// ......
// ......
}
}
}
What I thought was that I would set a boolean flag to determine when the receive has completed (the message will have its length prefixed) and then start processing that data. But what about send()? Can you please tell me the possibilities?
EDIT: See the FD_READ event part.
Unless the protocol (application layer) you are handling gives you information about how much data you're about to receive, the only way to determine that there is nothing more to receive is when the peer disconnects. If the server simply stops sending, you can't tell whether that's the end or it's just busy. It ends when it ends. You also can't tell whether the server disconnected because it reached the end or because the connection was broken.
That's why most protocols inform the peer of how many bytes are going to be sent before sending them, or place a boundary at the end of the data.
About sending, you must be aware of the buffer you're using. When you send(), the data goes to a buffer (64 KB by default). send() returns the number of bytes placed in the buffer; if that is less than the number of bytes you were trying to send, you have to keep the remainder and try again the next time you receive an FD_WRITE event.
You can't be sure how much data has already been received by the peer unless it keeps you informed (mIRC DCC does that).
Not sure it clarified your doubts, hope it helped :)
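To illustrate that FD_WRITE pattern, here is a minimal sketch; PendingSend and flushPending are hypothetical names, not from the book sample:
#include <winsock2.h>
#include <vector>

// Per-socket state: bytes that send() has not yet accepted.
struct PendingSend {
    std::vector<char> data;
    size_t offset = 0;
};

// Try to push the pending bytes; returns false on a fatal error.
// Call it when you first queue data, and again from the FD_WRITE handler.
bool flushPending(SOCKET s, PendingSend& pending)
{
    while (pending.offset < pending.data.size())
    {
        int n = send(s, pending.data.data() + pending.offset,
                     static_cast<int>(pending.data.size() - pending.offset), 0);
        if (n != SOCKET_ERROR) {
            pending.offset += n;
            continue;
        }
        if (WSAGetLastError() == WSAEWOULDBLOCK)
            return true;  // buffer full: wait for the next FD_WRITE event
        return false;     // real error
    }
    return true;          // everything handed off to the kernel
}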
When you are doing the recv, you need to save the return status to determine whether the data was received. recv returns the number of bytes received, and I would use the flag MSG_WAITALL instead of zero for the fourth parameter to receive the whole message (based on the buffer size). If the status recv returns is negative, there was an error of some nature, such as the connection being closed from the other end, or some other issue.
As for the send, you should save the return value as well, since it is also the status; but in this case there is no flag to make it send all the data before returning. You will have to determine the amount sent and adjust the buffer and send size based on that value. As with recv, a negative value indicates an error has occurred.
I would read the function descriptions on the Microsoft website for recv and send for more information on the return values and flags.
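For example, a small sketch of that recv usage on a blocking socket, where sock, buffer, and msgLen are placeholder names and the message length is assumed to be known in advance (e.g. from a length prefix):
// MSG_WAITALL asks recv to keep waiting until the full msgLen bytes
// have arrived; it can still return less if the connection closes
// or an error occurs, so the return value must be checked anyway.
int n = recv(sock, buffer, msgLen, MSG_WAITALL);
if (n == SOCKET_ERROR) {
    // error, e.g. connection reset; see WSAGetLastError()
} else if (n == 0) {
    // peer closed the connection gracefully
} else if (n < msgLen) {
    // connection ended in the middle of the message
} else {
    // exactly msgLen bytes received
}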