ioctlsocket or recv takes more time to execute in Windows socket programming? - C++

In socket programming, some data is sent to the server, and as soon as the server receives it, it sends back an acknowledgement response message. That response is more than one byte, so while receiving I check for more than one byte, and here I lose around 120-200 ms, which is a big issue because the client needs to send an ack back for this acknowledgement. I have sniffed the traffic and can see the data arrives at my IP at the same time the server sends it, but recv, or the ioctlsocket call I use to check that more than one byte is ready to be read, takes time to see more than one byte. How can I resolve this? The code is as follows.
DWORD RecvCount = 0;
char szBuff1[2048];
bool stop = false;
while (!stop)
{
    // Busy-wait: poll the socket until more than one byte is available.
    ioctlsocket(*socket, FIONREAD, &RecvCount);
    if (RecvCount > 1)
        stop = true;
}
int Res = recv(*socket, szBuff1, RecvCount, 0);

You should disable the Nagle algorithm on Windows; otherwise the stack will sit on your small writes while it waits to coalesce them with more data (or at least wait a couple of hundred milliseconds before sending them anyway).
You do this by setting the TCP_NODELAY socket option:
int flag = 1;
int result = setsockopt(m_Socket, IPPROTO_TCP, TCP_NODELAY, (char*)&flag, sizeof(int));
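Beyond that, if the acknowledgement has a known, fixed length, a plain blocking recv() loop removes the need to busy-wait on FIONREAD at all. A minimal sketch (ACK_LEN is a placeholder for your actual message size, not something from the original code):
const int ACK_LEN = 2;                     // hypothetical fixed ack length
char ackBuf[ACK_LEN];
int received = 0;
while (received < ACK_LEN)
{
    // A blocking recv() returns as soon as any data arrives, so there is
    // no need to poll FIONREAD in a tight loop.
    int res = recv(*socket, ackBuf + received, ACK_LEN - received, 0);
    if (res == SOCKET_ERROR || res == 0)
        break;                             // error or connection closed
    received += res;
}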

Related

C++ tcp socket connection retry method

After developing a sample client-server application that can exchange some data, I'm trying to add a retry mechanism to it. Currently my application follows the protocol below:
The client connects to the server (non-blocking mode) with a 3-second timeout and 2 retries.
The client starts sending fixed-length data. The send path checks whether the complete data was sent.
Receive the response (timeout: 3 secs) from the server and verify it. If an incorrect response is received, re-send the data and wait for the response again. Repeat this twice on failure.
For the above implementation, the code sections look something like this:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
Now I'm making the retries based on the return values of the socket functions: if send() or recv() fails I retry the same call, but I'm not calling connect() again.
I tested this by restarting the server in the middle of a data transfer; as a result the client fails to communicate with the server and quits after several retries. I believe this is happening because there is no connect() call in the retry path.
Any suggestions?
Example code for receiving socket data
bool CTCPCommunication::ReceiveSocketData(char* pchBuff, int iBuffLen)
{
    bool bReturn = true;
    // Check whether the socket is ready to receive.
    fd_set stRead;
    FD_ZERO(&stRead);
    FD_SET(m_hSocket, &stRead);
    int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
    // If the socket is not ready, this line is hit after the 3 sec timeout and we go to the end.
    // If it is ready, control goes into the read loop and reads data until the data ends or
    // a socket error keeps being triggered for more than 3 secs.
    if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
    {
        DWORD dwStartTime = GetTickCount();
        DWORD dwCurrentTime = 0;
        while ((iBuffLen - 1) > 0)
        {
            int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen - 1, 0);
            dwCurrentTime = GetTickCount();
            // Receive failed due to socket error.
            if (iRcvLen == SOCKET_ERROR)
            {
                if ((dwCurrentTime - dwStartTime) >= SOCK_TIMEOUT_SECONDS * 1000)
                {
                    WRITELOG("Call to socket API 'recv' failed after 3 secs continuous retries, error: %d", WSAGetLastError());
                    bReturn = false;
                    break;
                }
                continue;   // retry; do not adjust the buffer pointer with -1
            }
            // Connection closed by remote host.
            else if (iRcvLen == 0)
            {
                WRITELOG("recv() returned zero - time to do something: %d", WSAGetLastError());
                break;
            }
            pchBuff += iRcvLen;
            iBuffLen -= iRcvLen;
        }
    }
    else
    {
        WRITELOG("Call to API 'select' failed inside 'ReceiveSocketData', error: %d", WSAGetLastError());
        bReturn = false;
    }
    return bReturn;
}
Currently my application is following below protocol:
Client connects to server (non blocking mode) with 3 secs timeout and with 2 retries.
You can't retry a connection. You have to close the socket whose connect attempt failed, create a new socket, and call connect() again.
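A minimal sketch of that pattern (ConnectWithTimeout and serverAddr are hypothetical stand-ins for the connect()/select() code described above; they are not from the original question):
// Hypothetical sketch: every retry starts from a brand-new socket.
SOCKET s = INVALID_SOCKET;
bool connected = false;
for (int attempt = 0; attempt < 3 && !connected; ++attempt)
{
    s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        break;
    if (ConnectWithTimeout(s, serverAddr, 3 /* seconds */))   // your connect()/select() wrapper
        connected = true;
    else
        closesocket(s);   // the failed socket cannot be reused; discard it
}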
Start sending data from client with fixed length. Send has some error checking whether it is sending the complete data or not.
This isn't necessary in blocking mode: the POSIX standard guarantees that a blocking-mode send() will send all the data, or fail with an error.
Receive response (timeout: 3secs) from server and verify that. If incorrect response received, re-send the data and wait for response. Repeat this for two times if failed.
This is a bad idea. Most probably all the data will arrive, including all the retries, or none of it. You need to make sure your transactions are idempotent if you use this technique. You also need to pay close attention to the actual timeout period: 3 seconds is not adequate in general. A starting point is double the expected service time.
For the above implementation code sections look likes something below:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
You don't need the select() in blocking mode. You can just set a read timeout with SO_RCVTIMEO.
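A sketch of that (Winsock flavour; on Windows the timeout value is a DWORD in milliseconds):
// Sketch: 3-second receive timeout on a blocking socket. After this,
// recv() fails with WSAETIMEDOUT when no data arrives in time, so the
// select() before each recv() is no longer needed.
DWORD recvTimeoutMs = 3000;
setsockopt(m_hSocket, SOL_SOCKET, SO_RCVTIMEO,
           (const char*)&recvTimeoutMs, sizeof(recvTimeoutMs));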
Now I'm making the retries based on the return values of the socket functions: if send() or recv() fails I retry the same call, but I'm not calling connect() again.
I tested this by restarting the server in the middle of a data transfer; as a result the client fails to communicate with the server and quits after several retries. I believe this is happening because there is no connect() call in the retry path.
If that were true you would get an error saying so.

Does a blocking send() return immediately?

I thought that calling send() on a blocking socket does not return until all the data has been sent (that is, until the last chunk of data has been copied to the send buffer); however, the following test showed otherwise:
// buffer = "AAAAAAAA...B" (10 MB)
char *buffer = new char[10485760];
memset(buffer, 0x41, 10485760);
buffer[10485758] = 0x42;
buffer[10485759] = '\0';
// Send buffer
send(s, buffer, 10485760, 0) ;
printf("send() has returned");
So basically I connected to Netcat and sent the buffer, and even after send() had returned, AAAAAAAAAAAAAA... was still being displayed on the console at the other end. You can close the sender at any moment and the sending stops (so it is not that the buffer has already arrived at the other end and merely takes a long time to display on the console).
This can only make sense if the send buffer is 10+ MB.
Edit: the return value of send() is 10485760 (i.e. buffer size).
send() hands the data to the kernel, where it is placed in the socket's send buffer. If the kernel runs out of send-buffer space, the send will block (or fail, if the socket is non-blocking).
That has very little to do with the kernel sending data to the network.
However, if you kill a program, all of its sockets are forcibly closed, which will discard any unsent data sitting in kernel buffers.
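To confirm the earlier observation about buffer size, you can ask the kernel how large the send buffer actually is for that socket. A quick sketch (Winsock-style getsockopt shown; note that on Linux the reported value includes kernel bookkeeping overhead):
// Sketch: query the socket's send-buffer size.
int sndBufSize = 0;
int optLen = sizeof(sndBufSize);
if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&sndBufSize, &optLen) == 0)
    printf("SO_SNDBUF = %d bytes\n", sndBufSize);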

Set connect timeout using setsockopt in Linux

I am writing a Linux Qt5/C++ app that tries to connect to a peer using a QTcpSocket. I call
tcpsocket->connectToHost(address,port,options)
When the peer is available it works great and connects immediately. However, when the peer is not available: The first time I call the above, the connect waits 1 minute before I receive a SocketTimeoutError (5). Then, every subsequent call to connect might wait a second before I receive a ConnectionRefusedError (0), or might wait a full minute (depending on the system tested).
Is there a setsockopt I can use to reduce the time waiting for initial connect?
I should point out that I already set some socket options in order to quickly notify me of a lost connection (see below). Hopefully these aren't causing the 1 minute initial connection error delay:
int enableKeepAlive = 1;
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enableKeepAlive, sizeof(enableKeepAlive));
int maxIdle = 5; /* seconds */
setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &maxIdle, sizeof(maxIdle));
int count = 3; // send up to 3 keepalive packets out, then disconnect if no response
setsockopt(fd, SOL_TCP, TCP_KEEPCNT, &count, sizeof(count));
int interval = 2; // send a keepalive packet out every 2 seconds (after the 5 second idle period)
setsockopt(fd, SOL_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
Rather than rely on setsockopt(), why don't you instead set your socket to non-blocking mode and perform an asynchronous connect(). You'd then block on select(), poll() or whatever event demultiplexing mechanism you are using, setting the timeout to whatever you desire. Once it becomes writable you know the connection is complete.
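If you drop down to the raw socket level (QTcpSocket does not expose this directly), the usual pattern looks roughly like the sketch below; the function name and timeout are illustrative, not part of any existing API:
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>
#include <fcntl.h>
#include <errno.h>

// Sketch: non-blocking connect() with a caller-chosen timeout.
bool connect_with_timeout(int fd, const sockaddr* addr, socklen_t len, int seconds)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    if (connect(fd, addr, len) == 0)
        return true;                      // connected immediately
    if (errno != EINPROGRESS)
        return false;                     // immediate failure

    fd_set wset;
    FD_ZERO(&wset);
    FD_SET(fd, &wset);
    timeval tv = { seconds, 0 };
    if (select(fd + 1, NULL, &wset, NULL, &tv) <= 0)
        return false;                     // timed out (or select error)

    int err = 0;
    socklen_t errlen = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen);
    return err == 0;                      // 0 means the connect completed
}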

TCP connection accepted, but writing data causes it to use a stale connection

The server (192.168.1.5:3001), is running Linux 3.2, and is designed to only accept one connection at a time.
The client (192.168.1.18), is running Windows 7. The connection is a wireless connection. Both programs are written in C++.
It works great in 9 out of 10 connect/disconnect cycles. On roughly every tenth connection (it happens at random), the server accepts the connection, but when it later actually writes to it (typically 30+ seconds later), Wireshark (see screenshot) shows it writing to an old, stale connection, on a port number that the client FINed a while ago but the server has not yet FINed. So the client and server connections seem to get out of sync: the client makes new connections while the server tries writing to the previous one. Every subsequent connection attempt fails once it gets into this broken state. The broken state can be triggered by going beyond the maximum wireless range for half a minute (as before, 9 in 10 cases this works, but it sometimes causes the broken state).
Wireshark screenshot behind link
The red arrows in the screenshot indicate when the server started sending data (Len != 0), which is the point when the client rejects it and sends a RST to the server. The coloured dots down the right edge indicate a single colour for each of the client port numbers used. Note how one or two dots appear well after the rest of the dots of that colour were (and note the time column).
The problem looks like it's on the server's end, since if you kill the server process and restart, it resolves itself (until next time it occurs).
The code is hopefully not too out-of-the-ordinary. I set the queue size parameter in listen() to 0, which I think means it only allows one current connection and no pending connections (I tried 1 instead, but the problem was still there). None of the errors appear as trace prints where "// error" is shown in the code.
// Server code
mySocket = ::socket(AF_INET, SOCK_STREAM, 0);
if (mySocket == -1)
{
    // error
}

// Set non-blocking
const int saveFlags = ::fcntl(mySocket, F_GETFL, 0);
::fcntl(mySocket, F_SETFL, saveFlags | O_NONBLOCK);

// Bind to port
// Union to work around pointer aliasing issues.
union SocketAddress
{
    sockaddr myBase;
    sockaddr_in myIn4;
};
SocketAddress address;
::memset(&address, 0, sizeof(address));
address.myIn4.sin_family = AF_INET;
address.myIn4.sin_port = htons(Port);
address.myIn4.sin_addr.s_addr = INADDR_ANY;
if (::bind(mySocket, &address.myBase, sizeof(address)) != 0)
{
    // error
}

if (::listen(mySocket, 0) != 0)
{
    // error
}

// main loop
{
    ...
    // Wait for a connection.
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(mySocket, &readSet);
    const int aResult = ::select(getdtablesize(), &readSet, NULL, NULL, NULL);
    if (aResult != 1)
    {
        continue;
    }

    // A connection is definitely waiting.
    const int fileDescriptor = ::accept(mySocket, NULL, NULL);
    if (fileDescriptor == -1)
    {
        // error
    }

    // Set non-blocking
    const int saveFlags = ::fcntl(fileDescriptor, F_GETFL, 0);
    ::fcntl(fileDescriptor, F_SETFL, saveFlags | O_NONBLOCK);
    ...
    // Do other things for 30+ seconds.
    ...
    const int bytesWritten = ::write(fileDescriptor, buffer, bufferSize);
    if (bytesWritten < 0)
    {
        // THIS FAILS!! (but succeeds the first ~9 times)
    }

    // Finished with the connection.
    ::shutdown(fileDescriptor, SHUT_RDWR);
    while (::close(fileDescriptor) == -1)
    {
        switch (errno)
        {
        case EINTR:
            // Break from the switch statement. Continue in the loop.
            break;
        case EIO:
        case EBADF:
        default:
            // error
            return;
        }
    }
}
So somewhere between the accept() call (assuming that is exactly the point when the SYN packet is sent), and the write() call, the client's port gets changed to the previously-used client port.
So the question is: how can it be that the server accepts a connection (and thus opens a file descriptor), and then sends data through a previous (now stale and dead) connection/file descriptor? Does it need some sort of option in a system call that's missing?
I'm submitting an answer to summarize what we've figured out in the comments, even though it's not a finished answer yet. It does cover the important points, I think.
You have a server that handles clients one at a time. It accepts a connection, prepares some data for the client, writes the data, and closes the connection. The trouble is that the preparing-the-data step sometimes takes longer than the client is willing to wait. While the server is busy preparing the data, the client gives up.
On the client side, when the socket is closed, a FIN is sent notifying the server that the client has no more data to send. The client's socket now goes into FIN_WAIT1 state.
The server receives the FIN and replies with an ACK. (ACKs are done by the kernel without any help from the userspace process.) The server socket goes into the CLOSE_WAIT state. The socket is now readable, but the server process doesn't notice because it's busy with its data-preparation phase.
The client receives the ACK of the FIN and goes into FIN_WAIT2 state. I don't know what's happening in userspace on the client since you haven't shown the client code, but I don't think it matters.
The server process is still preparing data for a client that has hung up. It's oblivious to everything else. Meanwhile, another client connects. The kernel completes the handshake. This new client will not be getting any attention from the server process for a while, but at the kernel level the second connection is now ESTABLISHED on both ends.
Eventually, the server's data preparation (for the first client) is complete. It attempts to write(). The server's kernel doesn't know that the first client is no longer willing to receive data because TCP doesn't communicate that information! So the write succeeds and the data is sent out (packet 10711 in your wireshark listing).
The client gets this packet and its kernel replies with RST because it knows what the server didn't know: the client socket has already been shut down for both reading and writing, probably closed, and maybe forgotten already.
In the wireshark trace it appears that the server only wanted to send 15 bytes of data to the client, so it probably completed the write() successfully. But the RST arrived quickly, before the server got a chance to do its shutdown() and close() which would have sent a FIN. Once the RST is received, the server won't send any more packets on that socket. The shutdown() and close() are now executed, but don't have any on-the-wire effect.
Now the server is finally ready to accept() the next client. It begins another slow preparation step, and it's falling further behind schedule because the second client has been waiting a while already. The problem will keep getting worse until the rate of client connections slows down to something the server can handle.
The fix will have to be for you to make the server process notice when a client hangs up during the preparation step, and immediately close the socket and move on to the next client. How you will do it depends on what the data preparation code actually looks like. If it's just a big CPU-bound loop, you have to find some place to insert a periodic check of the socket. Or create a child process to do the data preparation and writing, while the parent process just watches the socket - and if the client hangs up before the child exits, kill the child process. Other solutions are possible (like F_SETOWN to have a signal sent to the process when something happens on the socket).
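One way to do that periodic check during data preparation, as a rough sketch (Linux; the argument is the accepted, non-blocking client socket from the code above, and the helper name is illustrative):
#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

// Sketch: returns false once the client has performed an orderly shutdown.
bool clientStillConnected(int fd)
{
    pollfd pfd = { fd, POLLIN, 0 };
    if (poll(&pfd, 1, 0) > 0 && (pfd.revents & (POLLIN | POLLHUP)))
    {
        char tmp;
        // MSG_PEEK leaves any real data the client sent in the receive queue.
        ssize_t n = recv(fd, &tmp, 1, MSG_PEEK | MSG_DONTWAIT);
        if (n == 0)
            return false;   // recv() == 0 means the peer sent a FIN
    }
    return true;
}
Calling something like this every so often inside the preparation step lets the server close the dead socket and accept() the next client instead of writing into the void.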
Aha, success! It turns out the server was receiving the client's SYN, and the server's kernel was automatically completing the connection with a SYN-ACK, before accept() had been called. So there definitely is a listening queue, and having two connections waiting on the queue was half of the cause.
The other half of the cause was to do with information which was omitted from the question (I thought it was irrelevant because of the false assumption above). There was a primary connection port (call it A), and the secondary, troublesome connection port which this question is all about (call it B). The proper connection order is: A establishes a connection (A1), then B attempts to establish a connection (which would become B1)... within a time frame of 200ms (I had already doubled the timeout from 100ms, which was written ages ago, so I thought I was being generous!). If the server doesn't get a B connection within 200ms, it drops A1.

So then B1 establishes a connection with the server's kernel, waiting to be accepted. It only gets accepted on the next connection cycle, when A2 establishes a connection and the client also sends a B2 connection. The server accepts the A2 connection, then gets the first connection on the B queue, which is B1 (not accepted until now - the queue looked like B1, B2). That is why the server didn't send a FIN for B1 when the client had disconnected B1.

So the two connections the server has are A2 and B1, which are obviously out of sync. It tries writing to B1, which is a dead connection, so it drops A2 and B1. Then the next pair are A3 and B2, which are also an invalid pair. They never recover from being out of sync until the server process is killed and the TCP connections are all reset.
So the solution was to just change a timeout for waiting on the B socket from 200ms to 5s. Such a simple fix that had me scratching my head for days (and fixed it within 24 hours of putting it on stackoverflow)! I also made it recover from stray B connections by adding socket B to the main select() call, and then accept()ing it and close()ing it immediately (which would only happen if the B connection took longer than 5s to establish). Thanks #AlanCurry for the suggestion of adding it to the select() and adding the puzzle piece about the listen() backlog parameter being a hint.

Calculating socket upload speed

I'm wondering if anyone knows how to calculate the upload speed of a Berkeley socket in C++. My send call isn't blocking and takes 0.001 seconds to send 5 megabytes of data, but takes a while to recv the response (so I know it's uploading).
This is a TCP socket to an HTTP server, and I need to asynchronously check how many bytes of data have been uploaded and how many remain. However, I can't find any API functions for this in Winsock, so I'm stumped.
Any help would be greatly appreciated.
I solved my issue thanks to bdolan's suggestion to reduce SO_SNDBUF. Note that this code requires Winsock 2 (for overlapped sockets and WSASend). In addition, your SOCKET handle must have been created similarly to:
SOCKET sock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
Note the WSA_FLAG_OVERLAPPED flag as the final parameter.
In this answer I will go through the stages of uploading data to a TCP server and tracking each upload chunk and its completion status. This approach requires splitting your upload buffer into chunks (minimal modification to existing code required), uploading it piece by piece, and then tracking each chunk.
My code flow
Global variables
Your code must define the following global variables:
#define UPLOAD_CHUNK_SIZE 4096
int g_nUploadChunks = 0;
int g_nChunksCompleted = 0;
WSAOVERLAPPED *g_pSendOverlapped = NULL;
int g_nBytesSent = 0;
float g_flLastUploadTimeReset = 0.0f;
Note: in my tests, decreasing UPLOAD_CHUNK_SIZE results in increased upload speed accuracy, but decreases overall upload speed. Increasing UPLOAD_CHUNK_SIZE results in decreased upload speed accuracy, but increases overall upload speed. 4 kilobytes (4096 bytes) was a good compromise for a file ~500 kB in size.
Callback function
This function increments the bytes sent and chunks completed variables (called after a chunk has been completely uploaded to the server)
void CALLBACK SendCompletionCallback(DWORD dwError, DWORD cbTransferred, LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
{
g_nChunksCompleted++;
g_nBytesSent += cbTransferred;
}
Prepare socket
Initially, the socket must be prepared by reducing SO_SNDBUF to 0.
Note: In my tests, any value greater than 0 will result in undesirable behaviour.
int nSndBuf = 0;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&nSndBuf, sizeof(nSndBuf));
Create WSAOVERLAPPED array
An array of WSAOVERLAPPED structures must be created to hold the overlapped status of all of our upload chunks. To do this I simply:
// Calculate the amount of upload chunks we will have to create.
// nDataBytes is the size of data you wish to upload
g_nUploadChunks = ceil(nDataBytes / float(UPLOAD_CHUNK_SIZE));
// Overlapped array, should be delete'd after all uploads have completed
g_pSendOverlapped = new WSAOVERLAPPED[g_nUploadChunks];
memset(g_pSendOverlapped, 0, sizeof(WSAOVERLAPPED) * g_nUploadChunks);
Upload data
All of the data that needs to be send, for example purposes, is held in a variable called pszData. Then, using WSASend, the data is sent in blocks defined by the constant, UPLOAD_CHUNK_SIZE.
WSABUF dataBuf;
DWORD dwBytesSent = 0;
int err;
int i, j;
for(i = 0, j = 0; i < nDataBytes; i += UPLOAD_CHUNK_SIZE, j++)
{
int nTransferBytes = min(nDataBytes - i, UPLOAD_CHUNK_SIZE);
dataBuf.buf = &pszData[i];
dataBuf.len = nTransferBytes;
// Now upload the data
int rc = WSASend(sock, &dataBuf, 1, &dwBytesSent, 0, &g_pSendOverlapped[j], SendCompletionCallback);
if ((rc == SOCKET_ERROR) && (WSA_IO_PENDING != (err = WSAGetLastError())))
{
fprintf(stderr, "WSASend failed: %d\n", err);
exit(EXIT_FAILURE);
}
}
The waiting game
Now we can do whatever we wish while all of the chunks upload.
Note: the thread which called WSASend must regularly be put into an alertable state, so that our 'transfer completed' callback (SendCompletionCallback) is dequeued from the APC (Asynchronous Procedure Call) queue.
In my code, I continuously looped until g_nUploadChunks == g_nChunksCompleted. This is to show the end-user upload progress and speed (can be modified to show estimated completion time, elapsed time, etc.)
Note 2: this code uses Plat_FloatTime as a second counter, replace this with whatever second timer your code uses (or adjust accordingly)
g_flLastUploadTimeReset = Plat_FloatTime();
// Clear the line on the screen with some default data
printf("(0 chunks of %d) Upload speed: ???? KiB/sec", g_nUploadChunks);
// Keep looping until ALL upload chunks have completed
while(g_nChunksCompleted < g_nUploadChunks)
{
// Wait for 10ms so then we aren't repeatedly updating the screen
SleepEx(10, TRUE);
// Update chunk count
printf("\r(%d chunks of %d) ", g_nChunksCompleted, g_nUploadChunks);
// Not enough time passed?
if(g_flLastUploadTimeReset + 1 > Plat_FloatTime())
continue;
// Reset timer
g_flLastUploadTimeReset = Plat_FloatTime();
// Calculate how many kibibytes have been transmitted in the last second
float flByteRate = g_nBytesSent/1024.0f;
printf("Upload speed: %.2f KiB/sec", flByteRate);
// Reset byte count
g_nBytesSent = 0;
}
// Delete overlapped data (not used anymore)
delete [] g_pSendOverlapped;
// Note that the transfer has completed
Msg("\nTransfer completed successfully!\n");
Conclusion
I really hope this helps somebody in the future who wants to calculate upload speed on their TCP sockets without any server-side modifications. I have no idea how detrimental SO_SNDBUF = 0 is to performance, although I'm sure a socket guru will point that out.
You can get a lower bound on the amount of data received and acknowledged by subtracting the value of the SO_SNDBUF socket option from the number of bytes you have written to the socket. This buffer may be adjusted using setsockopt, although in some cases the OS may choose a length smaller or larger than you specify, so you must re-check after setting it.
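For instance, a rough sketch of that calculation (totalBytesWritten is a hypothetical running count kept by your send path, not something from the answer above):
// Sketch: re-read the actual send-buffer size, then derive a lower bound
// on the bytes the peer has received and acknowledged.
int sndBuf = 0;
int optLen = sizeof(sndBuf);
getsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&sndBuf, &optLen);
long long ackedAtLeast = totalBytesWritten - sndBuf;   // hypothetical counter
if (ackedAtLeast < 0)
    ackedAtLeast = 0;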
To get more precise than that, however, you must have the remote side inform you of progress, as winsock does not expose an API to retrieve the amount of data currently pending in the send buffer.
Alternately, you could implement your own transport protocol on UDP, but implementing rate control for such a protocol can be quite complex.
Since you don't have control over the remote side, and you want to do it in code, I'd suggest a very simple approximation. I assume a long-lived program/connection; one-shot uploads would be too skewed by ARP, DNS lookups, socket buffering, TCP slow start, etc.
Have two counters - length of the outstanding queue in bytes (OB), and number of bytes sent (SB):
increment OB by number of bytes to be sent every time you enqueue a chunk for upload,
decrement OB and increment SB by the number returned from send(2) (modulo -1 cases),
on a timer sample both OB and SB - either store them, log them, or compute running average,
compute outstanding bytes a second/minute/whatever, same for sent bytes.
The network stack does buffering, and TCP does retransmission and flow control, but that doesn't really matter here. These two counters tell you the rate at which your app produces data and the rate at which it is able to push it to the network. It's not a method for finding the real link speed, but a way to keep useful indicators of how well the app is doing.
If the data production rate is below the network output rate, everything is fine. If it's the other way around and the network cannot keep up with the app, there's a problem: you need either a faster network, a slower app, or a different design.
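A minimal sketch of those two counters (single-threaded sender assumed; the names are illustrative, not from any existing library):
#include <cstddef>

// OB: bytes queued by the app but not yet accepted by send().
// SB: bytes accepted by send() so far.
long long outstandingBytes = 0;
long long sentBytes        = 0;

void onEnqueue(size_t len)      { outstandingBytes += (long long)len; }

void onSendReturn(long n)       // n is the return value of send()
{
    if (n > 0)
    {
        outstandingBytes -= n;
        sentBytes        += n;
    }
    // n < 0: nothing was accepted; leave both counters unchanged.
}

// On a periodic timer, snapshot sentBytes and outstandingBytes; the delta
// of sentBytes over the interval is the app-to-kernel send rate.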
For one-time experiments just take periodic snapshots of netstat -sp tcp output (or whatever that is on Windows) and calculate the send-rate manually.
Hope this helps.
If your app uses packet headers like
0001234DT
where 000123 is the length of a single packet, you could consider using MSG_PEEK with recv() to get the packet length before you actually read the packet itself with recv().
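A sketch of that idea (Winsock; the 6-digit ASCII length prefix and the socket name sock are just assumptions based on the example header above):
// Sketch: peek at the fixed-width length field without consuming it.
char hdr[7] = { 0 };
int n = recv(sock, hdr, 6, MSG_PEEK);      // header stays queued in the socket
if (n == 6)
{
    int packetLen = atoi(hdr);             // e.g. "000123" -> 123
    // ...once FIONREAD / select() reports packetLen bytes available,
    // recv() the whole packet in a single call...
}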
The problem is send() is NOT doing what you think - it is buffered by the kernel.
sz = sizeof(int);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket send buffer = %d\n", now(), flag);
sz = sizeof(int);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket recv buffer = %d\n", now(), flag);
See what these show for you.
When you recv() on a non-blocking socket that has data, it normally does not have megabytes of data parked in the buffer ready to recv. In my experience the socket typically has ~1500 bytes of data per recv. Since you are probably reading on a blocking socket, it takes a while for the recv() to complete.
Socket buffer size is probably the single best predictor of socket throughput. setsockopt() lets you alter the socket buffer size, up to a point. Note: these buffers are shared among sockets in a lot of OSes, like Solaris. You can kill performance by twiddling these settings too much.
Also, I don't think you are measuring what you think you are measuring. The real efficiency of send() is the measure of throughput on the recv() end. Not the send() end.
IMO.