I seem to have an issue with increasing latency on my packet transmission with my TCP server. Now, this server has to be TCP, since UDP is blocked by firewalls (this is a client-server-client type of communication). I'm also aware that sending a raw struct of floating-point values the way I am is extremely non-portable; however, this system will operate Windows client to Windows server to Windows client for the foreseeable future.
The issue is this: the client begins receiving the data properly from the other client; however, there is a delay which gets exponentially worse (by about 3 minutes in, the packets are nearly 30 seconds behind - but correct, when they DO arrive). I researched it and found an answer on a Microsoft page explaining that it is due to full send buffers; however, their syntax for setsockopt doesn't match the documented examples, so perhaps I'm wrong.
Anyway, any advice would be appreciated:
The relevant part of the server:
(When accept() is called:)
int buff_size = 2048000;
int nodel = 1;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&buff_size, sizeof(int));
setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&buff_size, sizeof(int));
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&nodel, sizeof(nodel));
The message redirect loop:
if (gp->curr_pilot < sz && gp->users[gp->curr_pilot].pilot == TRUE) {
    char* pbuf = new char[1024];
    int recvd = recv(gp->users[gp->curr_pilot].sockfd_data, pbuf, 1024, NULL);
    if (recvd > 0) {
        for (int i = 0; i < sz; i++) {
            if (i != gp->curr_pilot && gp->users[i].unioned == TRUE)
                send(gp->users[i].sockfd_data, pbuf, recvd, NULL);
        }
    }
    delete[] pbuf;
}
The client (master is set when it's sending, and it does get set properly by my code):
(data is my struct of doubles that the sending client fills in; cdata is a copy of it that the receiving client writes into).
while (kill_dataproc == FALSE) {
    if (master == TRUE) {
        char* buff = new char[1024];
        int packet_signer = 1192;
        memcpy_s(buff, intsz, &packet_signer, intsz);
        memcpy_s((void*)(buff + intsz), sz, data, sz);
        send(server_sock, buff, buffsize, NULL);
        delete[] buff;
    }
    else {
        char* buffer = new char[1024];
        int recvd = recv(server_sock, buffer, 1024, MSG_PEEK);
        if (recvd > 0) {
            int newpacketsigner = 0;
            memcpy_s(&newpacketsigner, intsz, buffer, intsz);
            if (newpacketsigner == 1192) {
                if (recvd >= buffsize) {
                    char* nbuf = new char[buffsize];
                    int recvd2 = recv(server_sock, nbuf, buffsize, NULL);
                    int err = WSAGetLastError();
                    memcpy_s(&newpacketsigner, intsz, nbuf, intsz);
                    memcpy_s(cdata, sz, (void*)(nbuf + intsz), sz);
                    //do things w/ the struct
                    delete[] nbuf;
                }
            }
            else
                recv(server_sock, buffer, 1024, NULL);
        }
        delete[] buffer;
    }
    Sleep(10);
}
As well, identical calls to setsockopt are made for the client's sockets, and all of the sockets, server and client, are non-blocking.
You're assuming that your reads are filling the buffer. They are only obliged to transfer at least one byte. You need to loop.
So, you have unread data backing up and stalling the sender.
NB Those receive buffers are greater than 64k and so may be inoperative unless they are set before the socket is connected. In the case of the server you need to set the receive buffer size on the listening socket; accepted sockets will inherit it. If you don't do it this way, window scaling won't be in effect, so a window > 64k cannot be advertised (unless the platform has window scaling on by default).
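To illustrate both points, here is a minimal sketch (my own, not the poster's code): a helper that keeps reading until the requested number of bytes has actually arrived, plus the SO_RCVBUF call moved onto the listening socket before accept(). The names recv_exact and listen_sock are assumptions for the example.

// Sketch only: keep calling recv() until exactly `len` bytes have arrived.
// On a non-blocking socket, WSAEWOULDBLOCK just means "no data yet", so we
// wait for readability with select() and try again.
bool recv_exact(SOCKET s, char* buf, int len)
{
    int total = 0;
    while (total < len) {
        int n = recv(s, buf + total, len - total, 0);
        if (n > 0) {
            total += n;
        } else if (n == 0) {
            return false; // peer closed the connection
        } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(s, &fds);
            select(0, &fds, NULL, NULL, NULL); // wait until readable (nfds ignored on Windows)
        } else {
            return false; // real socket error
        }
    }
    return true;
}

// Sketch only: request the large receive buffer on the *listening* socket,
// before listen()/accept(), so window scaling can be negotiated and the
// accepted data sockets inherit the setting.
int buff_size = 2048000;
setsockopt(listen_sock, SOL_SOCKET, SO_RCVBUF, (char*)&buff_size, sizeof(buff_size));
listen(listen_sock, SOMAXCONN);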
Related
I'm working on a webserver framework in C++ mostly for my own understanding, but I want to optimize it as well.
My question is: is it faster to write multiple char arrays to the TCP connection for every HTML response, or to spend the time concatenating them up front and write to the TCP connection only once? I was thinking about benchmarking it, but I am not quite sure how to go about it.
This is my first post on stackoverflow, although I have benefitted from the website very often!
Thanks!
Here is what I am talking about for sending many char arrays individually. The alternative would be to concatenate all of these char arrays into one char array and then send that.
int main() {
    sockaddr_in address;
    int server_handle;
    int addrlen = sizeof(address);

    if ((server_handle = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    {
        perror("cannot create socket");
        exit(0);
    }

    memset((char *) &address, 0, sizeof(address));
    address.sin_family = AF_INET;
    address.sin_addr.s_addr = htonl(INADDR_ANY);
    address.sin_port = htons(PORT);

    if (bind(server_handle, (sockaddr *) &address, (socklen_t) addrlen) < 0)
    {
        perror("bind failed");
        exit(0);
    }

    if (listen(server_handle, 3) < 0)
    {
        perror("In listen");
        exit(EXIT_FAILURE);
    }

    while (1) {
        std::cout << "\n+++++++ Waiting for new connection ++++++++\n\n";

        int client_handle;
        if ((client_handle = accept(server_handle, (struct sockaddr *)&address, (socklen_t *) &addrlen)) < 0)
        {
            perror("In accept");
            exit(EXIT_FAILURE);
        }

        // read and respond to client request
        char buffer[30000] = {0};
        int bytesRead = read(client_handle, buffer, 30000);

        char * httptype = "HTTP/1.1 ";
        char * status = "200 \n";
        char * contenttype = "Content-Type: text/html \n";
        char * contentlength = "Content-Length: 21\n\n";
        char * body = "<h1>hello world!</h1>";

        write(client_handle, httptype, 9);
        write(client_handle, status, 5);
        write(client_handle, contenttype, 26);
        write(client_handle, contentlength, 20);
        write(client_handle, body, 21);

        std::cout << "------------------Response sent-------------------\n";
        close(client_handle);
    }
}
If you want to send multiple buffers with a single write call you can use vectored IO (aka scatter/gather IO) as the manual suggests:
char *str0 = "hello ";
char *str1 = "world\n";
struct iovec iov[2];
ssize_t nwritten;
iov[0].iov_base = str0;
iov[0].iov_len = strlen(str0);
iov[1].iov_base = str1;
iov[1].iov_len = strlen(str1);
nwritten = writev(STDOUT_FILENO, iov, 2);
In fact, writing to a socket is not really different from writing to a file descriptor. And the fwrite function was introduced to the C library for a reason: a write (be it to a TCP connection or to a file) involves a system call on common OSes and a user/kernel context switch. That context switch has some overhead, mainly if you write small chunks of data.
On the other hand, if you write larger chunks of data in sizes that are close to the physical size for the underlying system call (disk buffer for a file descriptor, or max packet size for a network socket), the fwrite call, or in your example the code concatenating char arrays, will not really lower the system overhead and will just add some user-code processing.
TL/DR: this depends on the average size of what you write. The smaller it is, the greater the benefit of concatenating the data into larger chunks before writing. And remember: this is a low-level optimization that should only be considered if you have identified a performance bottleneck or if the code could be used in a broadly distributed library.
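For the concatenation route, a minimal sketch (assuming the same response pieces and the client_handle descriptor from the question's code; error handling omitted) would build the whole response into one buffer and issue a single write:

// Sketch only: assemble the response up front, then a single write() call.
std::string response;
response += "HTTP/1.1 200 \n";
response += "Content-Type: text/html \n";
response += "Content-Length: 21\n\n";
response += "<h1>hello world!</h1>";
write(client_handle, response.data(), response.size());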
A simple summary:
A boost::asio server sends a video frame (720x768x3) with simple compression.
The packet size is 186476 bytes - not really too much.
Nothing too complicated. Anyway, I test it in the HoloLens emulator and on the physical device:
// send: uint32_t data_length == size of frame 'data_ptr'
enum { max_length = sizeof(uint32_t) };
memcpy(data_, &data_length, max_length);
auto length = boost::asio::write(*socket_, boost::asio::buffer(data_, max_length), e);
length = boost::asio::write(*socket_, boost::asio::buffer(data_ptr, data_length), e);

// receive
char data_[max_length] = { 0 };
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(_socket, &readSet);
timeval timeout;
timeout.tv_sec = 0; // Zero timeout (poll)
timeout.tv_usec = 0;
auto result = select(_socket, &readSet, nullptr, nullptr, &timeout);
if (result == 0)
    continue;

result = recv(_socket, data_, max_length, 0);
if (result == SOCKET_ERROR) {
    closesocket(_socket);
    _socket = INVALID_SOCKET;
    break;
}

uint32_t msg_size(0);
memcpy(&msg_size, data_, max_length);
std::vector<char> vec(msg_size);
result = recv(_socket, &vec[0], msg_size, 0);
while (result < msg_size) {
    result += recv(_socket, &vec[result], msg_size - result, 0);
}
But the HoloLens can't receive the full packet. I also tried it with the .NET StreamSockets, with the same result. It works a few times, and then recv blocks in the while loop and doesn't receive anything anymore.
Anyone, any idea? Is it a UWP app problem that I can't receive 'bigger' packets, or does it get killed because it takes too long?
You have two main problems:
First, you need to check the return value of recv for errors. If it returns 0 or -1, you need to handle that.
Second, you ignore all the data you received from your first call to recv. You set msg_size to zero when it should be result minus however many bytes the length took.
I would suggest writing a function that reads exactly the specified number of bytes, checking for errors. Call it first to receive four bytes and check if it returned an error. Then call it to receive the number of bytes indicated by the length data you received.
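A rough sketch of that suggestion (my own illustration, not the poster's code; plain Winsock calls and the question's _socket variable assumed):

// Sketch only: read exactly `len` bytes from a blocking socket, or fail.
bool read_exact(SOCKET s, char* buf, uint32_t len)
{
    uint32_t got = 0;
    while (got < len) {
        int n = recv(s, buf + got, (int)(len - got), 0);
        if (n <= 0)        // 0 = connection closed, negative = error
            return false;
        got += n;
    }
    return true;
}

// Usage: the 4-byte length prefix first, then the payload.
uint32_t msg_size = 0;
if (!read_exact(_socket, (char*)&msg_size, sizeof(msg_size))) {
    // handle error / closed connection
}
// If the sender's byte order could differ from this machine's, convert here (e.g. ntohl).
std::vector<char> vec(msg_size);
if (!read_exact(_socket, vec.data(), msg_size)) {
    // handle error / closed connection
}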
Smaller problems include:
What if the first recv only returns one byte?
What if the way your platform stores 32-bit integers isn't the same as the way the emulator sends it?
I can't send very large data packets over my setup (currently sending to 127.0.0.1); at about 30 kB this functionality starts to fail. For testing I have an application that just creates a Receiver and a Sender, starts two threads, one for sending and one for receiving, and when both have finished, compares whether the sent string is the same as the received string.
void SenderThread(int count)
{
    messageOut = "";
    messageOut.append(count, 'A');
    sender->sendData(messageOut);
}

void ReceivingThread()
{
    receiver->ReceiveData(message);
}

main()
{
    receiver = new utility::Receiver();
    sender = new utility::Sender();

    receiver->startSocket(9000);
    sender->connectToSocket("127.0.0.1", 9000);
    receiver->accept();

    for (int count = 100; count < 1024 * 1024; count += 100)
    {
        std::thread sendThread(SenderThread, count);
        std::thread recvThread(ReceivingThread);

        sendThread.join();
        recvThread.join();

        printf("Sent data of length %d ", messageOut.length());
        if (message == messageOut)
            printf("successfully.\n");
        else
        {
            printf("not successfully.\n");
            printf("Length of original message: %d, Length of received message: %d.\n", messageOut.length(), message.length());
            break;
        }
    }

    delete receiver;
    delete sender;
}
I have following code for my sending socket:
bool utility::Sender::sendData(const std::string & message)
{
    int numBytes = 0;
    int totalSent = 0;

    // Break condition: send() fails, or whole message was transferred
    while (totalSent < message.length() && send(message.substr(totalSent).c_str(), message.length() - totalSent, numBytes))
    {
        totalSent += numBytes;
    }
    return false;
}

bool utility::Sender::send(const char* pBuffer, int32_t lengthOfBuffer, int32_t &numBytes)
{
    numBytes = ::send(connectSocket, pBuffer, lengthOfBuffer, 0);
    if (numBytes == SOCKET_ERROR)
        return false;
    return true;
}
The receiving side:
bool utility::Receiver::ReceiveData(std::string& message)
{
    int32_t numBytes = 0;
    char data[defaultBufferLength];

    // Set to blocking for the first data package
    u_long iMode = 0;
    ioctlsocket(tcpSocket, FIONBIO, &iMode);

    bool success = receive(data, defaultBufferLength, numBytes);
    message = std::string(data, numBytes);

    // Set to non-blocking for the rest of the journey
    iMode = 1;
    ioctlsocket(tcpSocket, FIONBIO, &iMode);

    while (numBytes == defaultBufferLength && receive(data, defaultBufferLength, numBytes))
    {
        message.append(data, numBytes);
    }
    return success;
}

bool utility::Receiver::receive(char* pBuffer, int32_t lengthOfBuffer, int32_t& numBytes)
{
    int32_t flags = 0;
    numBytes = recv(tcpSocket, pBuffer, lengthOfBuffer, flags);
    if (numBytes == -1)
    {
        numBytes = 0;
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return false;
        else
            close();
    }
    return true;
}
The output I am getting is
Sent data of length 39200 successfully.
Sent data of length 39300 successfully.
Sent data of length 39400 successfully.
Sent data of length 39500 successfully.
Sent data of length 39600 successfully.
Sent data of length 39700 successfully.
Sent data of length 39800 successfully.
Sent data of length 39900 successfully.
Sent data of length 40000 successfully.
Sent data of length 40100 successfully.
Sent data of length 40200 successfully.
Sent data of length 40300 successfully.
Sent data of length 40400 successfully.
Sent data of length 40500 not successfully.
Length of original message: 40500, Length of received message: 29200.
The most irritating thing, and probably the cause of this, is ::send(...). I can give it 2 MB of char* and it will just send it in one swoop (but the receiver fails miserably). What can I do about that?
TCP is a byte-oriented protocol, not message oriented.
send does not create a message. recv does not receive a message. They work on blocks of bytes, and multiple send calls can be combined at the network layer (for efficiency) or broken into multiple TCP packets. In practice, even if you turn off Nagle's algorithm, if a frame is lost at the physical layer and TCP has to retry the transmission, the retransmit will include as much data added to the buffer afterward as it can fit in an outgoing datagram.
So you can't rely on any particular mapping between send calls and recv calls. The only guarantee is that the bytes are delivered to your socket in the same order they were sent. If boundaries are important, you have to create them yourself. Length prefixes are popular in combination with TCP, special framing sequences less so.
You do already have a loop for reassembling messages... but you break out of the loop when you see EAGAIN / EWOULDBLOCK or a partly filled buffer, and continue processing. That's a problem, because you only have a partial message at that point. You need a way to delay processing until you have a complete message.
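One common way to create those boundaries (a sketch of the length-prefix idea, not code from the question; sendMessage is a made-up name) is to prepend a 4-byte length in network byte order and loop until the whole framed message has been handed to send():

// Sketch only: send one length-prefixed message over a connected TCP socket.
bool sendMessage(SOCKET s, const std::string& msg)
{
    uint32_t len = htonl((uint32_t)msg.size());      // length prefix, network byte order
    std::string framed((const char*)&len, sizeof(len));
    framed += msg;

    size_t total = 0;
    while (total < framed.size()) {
        int n = ::send(s, framed.data() + total, (int)(framed.size() - total), 0);
        if (n == SOCKET_ERROR)
            return false;                            // (handle WSAEWOULDBLOCK here on a non-blocking socket)
        total += n;
    }
    return true;
}

The receiver then does the mirror image: read exactly four bytes, ntohl() the value, and keep reading until that many payload bytes have arrived before processing anything.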
Adding to ben-voigt's answer: you need to create a higher-level message system for your socket, so you can send the message size to the other side, and in your socket receive method keep a session or buffer store that you append the received data to until the total data received matches the message size. Once that requirement is met, you can process the data.
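A minimal sketch of that accumulation idea (my own illustration; it assumes the 4-byte length prefix described above and a made-up onBytesReceived hook that is called with whatever recv() returned):

// Sketch only: per-connection reassembly buffer for length-prefixed messages.
std::string pending;                        // bytes received so far on this connection

void onBytesReceived(const char* data, int len)
{
    pending.append(data, len);              // append whatever recv() delivered
    while (pending.size() >= sizeof(uint32_t)) {
        uint32_t msgLen = 0;
        memcpy(&msgLen, pending.data(), sizeof(msgLen));
        msgLen = ntohl(msgLen);
        if (pending.size() < sizeof(uint32_t) + msgLen)
            break;                          // the whole message hasn't arrived yet
        std::string msg = pending.substr(sizeof(uint32_t), msgLen);
        pending.erase(0, sizeof(uint32_t) + msgLen);
        // process msg here
    }
}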
I have been using the following function to receive XML files for a while, but it has been going wrong for some time now, and I think the problem is on the customer's network. I'm not sure; it's just a guess.
It happens sometimes when they try to send me XML files bigger than 13 KB - the received buffer contains trash like this:
...
<Identifiers>
<Identifier>
<PID>E3744</PID>
</Identifier>
<Identifier IDType="SHC">
<PID>10021020</PID>
</Identifier>
<Identifier><*X| Å Å Ÿòc PV“R¢ E ·Â÷# #€ˆ
þõ
øæ=Ì×KåÅôdËÞ¦P s÷j
<PID>1002102-0</PID>
</Identifier>
<Identifier>
<PID>1002102</PID>
</Identifier>
</Identifiers>
...
Here is the function:
bool ReceiveBuffer(HWND hDlg, const SOCKET& socket, string& sBuffer)
{
    WSAAsyncSelect(socket, hDlg, WM_WINSOCK, FD_CLOSE);

    int iBufSize = 10000000; //10MB
    int iBufVarSize = sizeof(iBufSize);
    if (setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize) == SOCKET_ERROR)
        if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
            WriteLog("Unable to GET buffer receiving size");

    char* buf = (char*)MALLOCZ(iBufSize);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead = 0;
    do
    {
        memset(buf, 0, iBufSize);
        iCharsRead = recv(socket, buf, iBufSize, 0);
        if (iCharsRead > 0)
            sBuffer.append(buf, iCharsRead);
    }
    while (iCharsRead > 0);

    FREE(buf);
    buf = NULL;

    return true;
}
ReceiveBuffer() should not be calling WSAAsyncSelect() or setting SO_RCVBUF. That is the responsibility of whatever code initially creates the SOCKET.
But more importantly, WSAAsyncSelect() puts the socket into non-blocking mode, per the documentation:
The WSAAsyncSelect function automatically sets socket s to nonblocking mode, regardless of the value of lEvent.
However, your reading loop is not accounting for possible WSAEWOULDBLOCK errors from recv() so it can call recv() again to keep reading.
ReceiveBuffer() is also assuming that if setsockopt() succeeds then the actual buffer size is really the requested size, which is not guaranteed. So you need to call getsockopt() regardless of whether setsockopt() succeeds or fails, per the documentation:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
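For illustration (a sketch, not the poster's code), the verification the documentation describes looks roughly like this:

// Sketch only: request a larger receive buffer, then query what was actually granted.
int requested = 1024 * 1024;       // purely illustrative size
setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&requested, sizeof(requested));

int actual = 0;
int actualLen = sizeof(actual);
if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&actual, &actualLen) == 0)
{
    // `actual` now holds the buffer size really in effect, which may be
    // smaller (or larger) than what was requested.
}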
But really, setting SO_RCVBUF on every call to ReceiveBuffer() is not necessary in the first place. recv() returns whatever data is currently available at that moment, up to the requested buffer size. It is very unlikely that it will return anywhere close to 10MB of data on any given read. So you are just wasting a lot of memory for no real benefit. It is one thing to set the socket's internal buffer to 10MB if you are on a fast network. It is another thing to allocate a memory buffer of 10MB to receive data from each recv() call. You should use a much smaller memory buffer. 1K is a common size to use.
But beyond that, regardless of the buffer size you use, ReceiveBuffer() is reading arbitrary bytes in an endless loop until the socket is disconnected or errors (and not accounting for non-blocking errors). When the socket does eventually disconnect/error, ReceiveBuffer() is returning true instead of false, so the caller has no idea that something went wrong, or that sBuffer may be incomplete.
Also, in case the caller calls ReceiveBuffer() multiple times with the same variable for the sBuffer parameter, you should call sBuffer.clear() before starting the reading loop to make sure you are not appending new data to the end of stale data.
Now, all of the above is just technical issues with your code logic. But there is also a semantic element as well. XML has a finite length to it, but your current code has no way of knowing what that length actually is. It is the sender's responsibility to tell the receiver when the XML has stopped being sent. That could be by sending the XML's length before sending the XML itself, so the receiver knows how many bytes to expect. Or that could be by sending a unique delimiter, like a null terminator, at the end of the XML, so the receiver can stop reading when it sees the delimiter. Or that could be by gracefully closing the connection at the end of the XML (which is a bad idea, because then the receiver can't differentiate between end-of-data and data loss). But it has to do something.
Now, with all of that said, try something more like this instead (I'm assuming a graceful disconnect is the end-of-data indicator, since that is what your original code is doing - you need to seriously consider a different protocol design!):
bool ReceiveBuffer(SOCKET socket, string& sBuffer)
{
    sBuffer.clear();

    /*
    int iBufSize = 1024 * 1024 * 10; //10MB
    setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, sizeof(iBufSize));
    int iBufVarSize = sizeof(iBufSize);
    if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
        WriteLog("Unable to GET buffer receiving size");
    */

    char* buf = (char*) malloc(1024);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead;
    bool bRet = true;

    do
    {
        iCharsRead = recv(socket, buf, 1024, 0);
        if (iCharsRead > 0)
        {
            sBuffer.append(buf, iCharsRead);
        }
        else if (iCharsRead == 0)
        {
            // socket disconnected gracefully
            break;
        }
        else
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // socket error!
                WriteLog("Unable to read from socket");
                bRet = false;
                break;
            }

            // socket is non-blocking and there is no data available
            // at this moment. Call recv() again...

            // optional: call select() to wait for new data to arrive
            // before calling recv() again. For instance, this will
            // allow you to fail the function if no new data arrived
            // within a timeout period...
            //
            /*
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(socket, &fd);

            timeval tv;
            tv.tv_sec = 30;
            tv.tv_usec = 0;

            int ret = select(0, &fd, NULL, NULL, &tv);
            if (ret <= 0)
            {
                if (ret == 0)
                {
                    // timeout!
                    WriteLog("Timeout waiting for data from socket");
                }
                else
                {
                    // socket error!
                    WriteLog("Unable to wait for data from socket");
                }
                bRet = false;
                break;
            }
            */
        }
    }
    while (true);

    free(buf);

    return bRet;
}
I have written simple client/server applications to test the characteristics of non-blocking sockets; here is some brief information about the server and the client:
//On Linux, the server thread will send
//a file to the client using a non-blocking socket
void *SendFileThread(void *param){
    CFile* theFile = (CFile*) param;
    int sockfd = theFile->GetSocket();
    set_non_blocking(sockfd);
    set_sock_sndbuf(sockfd, 1024 * 64); //set the send buffer to 64K

    //get the total packet count of the target file
    int packetCount = theFile->GetFilePacketsCount();
    int currPacket = 0;

    while (currPacket < packetCount){
        char buffer[512];
        int len = 0;
        //get packet data by packet no.
        GetPacketData(currPacket, buffer, len);

        //send_non_blocking_sock_data will loop and send
        //data into buffer of sockfd until there is error
        int ret = send_non_blocking_sock_data(sockfd, buffer, len);
        if (ret < 0 && errno == EAGAIN){
            continue;
        } else if (ret < 0 || ret == 0 ){
            break;
        } else {
            currPacket++;
        }
        ......
    }
}
//On Windows, the client thread will do something like below
//to receive the file data sent by the server via a blocking socket
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K

    while (1){
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;

        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds);

        //actually, the first parameter of select() is
        //ignored on windows, though on linux this parameter
        //should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout );
        if (ret == 0){
            // log that timer expires
            CLogger::log("RecvFileThread---Calling select() timeouts\n");
        } else if (ret) {
            //log the number of data it received
            int ret = 0;
            char buffer[1024 * 256];
            int len = recv(sockfd, buffer, sizeof(buffer), 0);
            // handle error
            process_tcp_data(buffer, len);
        } else {
            //handle error and break
            break;
        }
    }
}
What surprised me is that the server thread fails frequently because the socket buffer is full; e.g., to send a file of 14 MB it reports 50000 failures with errno = EAGAIN. However, via logging I observed there are tens of timeouts during the transfer; the flow is like below:
on the Nth loop, select() succeeds and 256K of data is read successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and 256K of data is read successfully.
Why would there be timeouts interleaved during the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14M to the server only takes 8 seconds
2. Using the same file with 1), the server takes nearly 30 seconds to send all data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason why #2 takes much more time than #1, and I wonder why there would be so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from #Duck, #ebrob, #EJP, #ja_mesa; I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found that the server thread sends data much faster than the client thread can receive it. I am very confused about why the timeouts happen on the client thread.
Consider this more of a long comment than an answer, but as several people have noted, the network is orders of magnitude slower than your processor. The point of non-blocking I/O is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much is chopped up for posting, but in the server you don't account for (ret == 0), i.e. a normal shutdown by the peer.
The select in the client is wrong. Again, not sure if that was sloppy editing or not, but if not, then the number of parameters is wrong and, more concerning, the first parameter - i.e. what should be the highest file descriptor for select to look at, plus one - is zero. Depending on the implementation of select, I wonder if that is in fact just turning select into a fancy sleep statement.
You should be calling recv() first and then call select() only if recv() tells you to do so. Don't call select() first; that is a waste of processing. recv() knows if data is immediately available or if it has to wait for data to arrive:
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K

    char buffer[1024 * 256];

    while (1){
        int len = recv(sockfd, buffer, sizeof(buffer), 0);
        if (len == -1) {
            if (WSAGetLastError() != WSAEWOULDBLOCK) {
                //handle error
                break;
            }

            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;

            fd_set rds;
            FD_ZERO(&rds);
            FD_SET(sockfd, &rds);

            //actually, the first parameter of select() is
            //ignored on windows, though on linux this parameter
            //should be (maximum socket value + 1)
            int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
            if (ret == -1) {
                // handle error
                break;
            }
            if (ret == 0) {
                // log that timer expires
                break;
            }

            // socket is readable so try read again
            continue;
        }

        if (len == 0) {
            // handle graceful disconnect
            break;
        }

        //log the number of data it received
        process_tcp_data(buffer, len);
    }
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
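For illustration (my own sketch of that suggestion, not the poster's send_non_blocking_sock_data), the sending side on the Linux server might look roughly like this:

// Sketch only: send `len` bytes on a non-blocking socket, waiting for
// writability with select() only when send() reports EAGAIN/EWOULDBLOCK.
int send_all_nonblocking(int sockfd, const char* buffer, int len)
{
    int total = 0;
    while (total < len) {
        int n = send(sockfd, buffer + total, len - total, 0);
        if (n > 0) {
            total += n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            // Send buffer is full: wait until the socket is writable again.
            fd_set wds;
            FD_ZERO(&wds);
            FD_SET(sockfd, &wds);
            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;
            int ret = select(sockfd + 1, NULL, &wds, NULL, &timeout);
            if (ret < 0)
                return -1;   // select error
            continue;        // writable (or timed out) -> try send() again
        }
        return -1;           // real send error
    }
    return total;
}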