Windows XP socket error with recv() - c++

I'm seeing some strange behaviour with the recv() function.
My C++ (MFC) application uses WinSock to implement a simple HTTP client (non-blocking socket) for accessing HTML pages on a web server. Some of these pages take a few seconds to load. On Windows 7 this is not a problem, because recv() returns partial data as it arrives. But on Windows XP recv() always returns SOCKET_ERROR with error code WSAEWOULDBLOCK, and only when the connection is finished is all the data returned in a single call.
Does anyone recognize this problem? How can I get Windows XP to deliver partial data as well?
I set the buffer size (SO_RCVBUF) to 1000 bytes. On Windows 7 this is also reflected in the TCP window size; on XP it is not.
The real problem this causes is that I don't know how to check whether the connection is still alive. How can I check if a connection is still alive? Or how can I specify a timeout (the maximum time between two received packets from the server)?

By default, a socket operates in blocking mode, so the only way you can get a WSAEWOULDBLOCK error at all is if you explicitly put the socket into non-blocking mode instead. Doing so, you agree to handle WSAEWOULDBLOCK (otherwise, don't use non-blocking mode).
WSAEWOULDBLOCK is not a real error, it is just an indication that the operation you attempted to perform cannot be completed at that moment because it would block the calling thread. You need to detect this "error" and simply retry the same operation again at a later time, preferably after a socket state change is detected.
For recv(), WSAEWOULDBLOCK simply means there is no data available on the socket to be read at that moment. In non-blocking mode, you should be using select() (or WSAEventSelect(), or WSAAsyncSelect(), or Overlapped I/O, or an I/O Completion Port) to detect inbound data before you then read it.
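For illustration, here is a minimal sketch of that select()-driven pattern (my example, assuming a connected non-blocking SOCKET named sock; error handling trimmed):
// Wait until the socket is readable, then drain what is available.
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(sock, &readSet);
timeval tv;
tv.tv_sec = 5;   // give up after 5 seconds of silence
tv.tv_usec = 0;
int ready = select(0, &readSet, NULL, NULL, &tv); // first argument is ignored on Windows
if (ready > 0 && FD_ISSET(sock, &readSet))
{
    char buf[1024];
    int n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0) { /* consume n bytes */ }
    else if (n == 0) { /* peer closed the connection gracefully */ }
    else if (WSAGetLastError() != WSAEWOULDBLOCK) { /* real error */ }
}
else if (ready == 0)
{
    // Timeout: no data arrived within 5 seconds.
}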
That being said, you are implementing an HTTP client, so you must follow the HTTP protocol properly, regardless of the socket I/O mode you are using and regardless of your socket buffer sizes. You must follow the pseudo-code logic I outlined in this answer on another question:
You must follow the rules outlined in RFC 2616. Namely:
1. Read until the "\r\n\r\n" sequence is encountered. Do not read any more bytes past that yet.
2. Analyze the received headers, per the rules in RFC 2616 Section 4.4. They tell you the actual format of the remaining response data.
3. Read the data per the format discovered in #2.
4. Check the received headers for the presence of a Connection: close header if the response is using HTTP 1.1, or the lack of a Connection: keep-alive header if the response is using HTTP 0.9 or 1.0. If detected, close your end of the socket connection because the server is closing its end. Otherwise, keep the connection open and re-use it for subsequent requests (unless you are done using the connection, in which case do close it).
5. Process the received data as needed.
In short, you need to do something more like this instead (pseudo code):
string headers[];
byte data[];

string statusLine = read a CRLF-delimited line;
int statusCode = extract from status line;
string responseVersion = extract from status line;

do
{
    string header = read a CRLF-delimited line;
    if (header == "") break;
    add header to headers list;
}
while (true);

if ( !((statusCode in [1xx, 204, 304]) || (request was "HEAD")) )
{
    if (headers["Transfer-Encoding"] ends with "chunked")
    {
        do
        {
            string chunk = read a CRLF-delimited line;
            int chunkSize = extract from chunk line;
            if (chunkSize == 0) break;
            read exactly chunkSize number of bytes into data storage;
            read and discard until a CRLF has been read;
        }
        while (true);

        do
        {
            string header = read a CRLF-delimited line;
            if (header == "") break;
            add header to headers list;
        }
        while (true);
    }
    else if (headers["Content-Length"] is present)
    {
        read exactly Content-Length number of bytes into data storage;
    }
    else if (headers["Content-Type"] == "multipart/byteranges")
    {
        string boundary = extract from Content-Type header;
        read into data storage until terminating boundary has been read;
    }
    else
    {
        read bytes into data storage until disconnected;
    }
}

if (!disconnected)
{
    if (responseVersion == "HTTP/1.1")
    {
        if (headers["Connection"] == "close")
            close connection;
    }
    else
    {
        if (headers["Connection"] != "keep-alive")
            close connection;
    }
}

check statusCode for errors;
process data contents, per info in headers list;
As you can see, HTTP requires reading CRLF-delimited lines of text, or fixed lengths of raw bytes. To do that, you must call recv() in a loop until you encounter the terminating CRLF, or have received the expected number of bytes, whichever the case may be. Whether you use a synchronous loop that just ignores WSAEWOULDBLOCK errors while looping, or you use a state machine driven by asynchronous events/callbacks, that is up to you to decide. That doesn't change how you must process the HTTP protocol.
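For example, reading one CRLF-delimited line could look like this sketch (my example, assuming a blocking SOCKET; it reads one byte at a time for clarity, not speed, and needs <winsock2.h>, <string>, and <stdexcept>):
std::string readLine(SOCKET sock)
{
    std::string line;
    char c;
    while (true)
    {
        if (recv(sock, &c, 1, 0) <= 0)
            throw std::runtime_error("connection closed or reset");
        line += c;
        size_t len = line.size();
        if (len >= 2 && line[len-2] == '\r' && line[len-1] == '\n')
            return line.substr(0, len - 2); // strip the trailing CRLF
    }
}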
This applies to all versions of Windows (even all platforms that use BSD-style socket APIs). What you are encountering is not a Windows bug at all. It is an underlying flaw in your understanding of how to use socket I/O correctly and effectively.
As for checking if the connection is alive, recv() will return 0 if the server closed the connection gracefully, or will report an error otherwise (usually WSAECONNABORTED or WSAECONNRESET, though there can be others). But an abnormal disconnect may take a long time to detect, so you should implement timeouts in your code instead. In synchronous mode, you can use setsockopt(SO_RCVTIMEO). In non-blocking mode, you can use select(). In asynchronous (overlapped) mode, you can use WaitForSingleObject() on whatever event/object you use to drive your state machine.
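For instance, a receive timeout on a blocking socket might look like this (a sketch; the 30-second value is arbitrary):
// On Windows, SO_RCVTIMEO takes milliseconds as a DWORD.
DWORD timeoutMs = 30000;
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeoutMs, sizeof(timeoutMs));
// A recv() that waits longer than this fails with SOCKET_ERROR
// and WSAGetLastError() == WSAETIMEDOUT.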

You can't expect recv() to always give you data on a non-blocking socket. If there's no data available, it fails with WSAEWOULDBLOCK. You just need to call recv() again later (normally after select() notifies you that data is available). Whether you get data on the first (or any given) call depends on how fast the server is sending it.
When the connection is closed, recv() behaves differently: it returns 0 on a graceful close, or an error such as WSAECONNRESET or WSAENOTCONN otherwise. select() will also notify you when the socket is closed.

It's very strange.
Today I changed my software to use blocking sockets, but it still doesn't work on Windows XP. Windows 7 is no problem.
So I thought: let's try another PC. On that PC (also Windows XP) it does work. Then I tried a third PC with Windows XP, and it works there too.
I still don't know what the problem is, but I suspect something is wrong with that particular PC.

Related

UnrealEngine4: Recv function would keep blocking when TCP server shutdown

I use a blocking FSocket on the client side, connected to a TCP server. If there is no message from the server, the socket thread blocks in FSocket::Recv(), as expected. But if the TCP server shuts down, the socket thread stays blocked in that function. With a blocking socket from the BSD socket API, the thread would return from recv() with an errno when the TCP server shuts down. So is this a defect of FSocket?
uint32 HRecvThread::Run()
{
    uint8* recv_buf = new uint8[RECV_BUF_SIZE];
    uint8* const recv_buf_head = recv_buf;
    int readLenSeq = 0;
    while (Started)
    {
        //if (TcpClient->Connected() && ClientSocket->GetConnectionState() != SCS_Connected)
        //{
        //    // server disconnected
        //    TcpClient->SetConnected(false);
        //    break;
        //}
        int32 bytesRead = 0;
        // Because this is a blocking socket, the thread blocks in Recv() if there is no message.
        ClientSocket->Recv(recv_buf, readLenSeq, bytesRead);
        // ...
        // (some logic to parse the TCP message bytes)
        // ...
    }
    delete[] recv_buf;
    return 0;
}
As suspected, you are ignoring the return code of Recv(), which presumably indicates success or failure, so you are looping indefinitely (not blocking) on an error or end-of-stream condition.
NB You should allocate the recv_buf on the stack, not dynamically. Don't use the heap when you don't have to.
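That is, assuming RECV_BUF_SIZE is a compile-time constant, simply:
uint8 recv_buf[RECV_BUF_SIZE]; // stack storage, released automatically
uint8* const recv_buf_head = recv_buf;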
There is a similar question on the forums in the UE4 C++ Programming section. Here is the discussion:
https://forums.unrealengine.com/showthread.php?111552-Recv-function-would-keep-blocking-when-TCP-server-shutdown
Long story short, in the UE4 Source, they ignore EWOULDBLOCK as an error. The code comments state that they do not view it as an error.
Also, there are several helper functions you should be using when opening the port and when polling it (I assume you are polling, since you are using blocking calls):
FSocket::Connect returns a bool, so make sure to check that return value.
FSocket::GetLastError returns the UE4-translated error code if an error occurred on the socket.
FSocket::HasPendingData will return a value that tells you whether it is safe to read from the socket.
FSocket::HasPendingConnection can check your connection state.
FSocket::GetConnectionState will tell you your active connection state.
Using these helper functions for error checking before making a call to FSocket::Recv will help you make sure you are in a good state before trying to read data. Also, it was noted in the forum posts that using the non-blocking code worked as expected. So, if you do not have a specific reason to use blocking code, just use the non-blocking implementation.
Also, as a final hint: FSocket::Wait will block, with a timeout, until your socket reaches a state of your choosing, e.g. it is readable and has data.
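Putting those helpers together, the receive loop might look something like this sketch (based on the FSocket functions listed above; the one-second timeout is an arbitrary choice):
while (Started)
{
    // Block for at most one second waiting for readable data.
    if (!ClientSocket->Wait(ESocketWaitConditions::WaitForRead, FTimespan::FromSeconds(1)))
    {
        // Timed out: a good moment to verify the connection is still up.
        if (ClientSocket->GetConnectionState() != SCS_Connected)
            break;
        continue;
    }
    uint32 pendingSize = 0;
    if (ClientSocket->HasPendingData(pendingSize))
    {
        int32 bytesRead = 0;
        if (!ClientSocket->Recv(recv_buf, RECV_BUF_SIZE, bytesRead) || bytesRead <= 0)
            break; // error or connection closed; check FSocket::GetLastError
        // ... process bytesRead bytes ...
    }
}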

TCP Socket - read most recent data from input queue [duplicate]

I've been reading through Beej's Guide to Network Programming to get a handle on TCP connections. In one of the samples the client code for a simple TCP stream client looks like:
if ((numbytes = recv(sockfd, buf, MAXDATASIZE-1, 0)) == -1) {
perror("recv");
exit(1);
}
buf[numbytes] = '\0';
printf("Client: received '%s'\n", buf);
close(sockfd);
I've set the buffer to be smaller than the total number of bytes that I'm sending. I'm not quite sure how I can get the other bytes. Do I have to loop over recv() until I receive '\0'?
*Note on the server side I'm also implementing his sendall() function, so it should actually be sending everything to the client.
See also 6.1. A Simple Stream Server in the guide.
Yes, you will need multiple recv() calls, until you have all data.
To know when that is, the return status of recv() alone is no good: it only tells you how many bytes you received in that call, not how many bytes are available in total, as some may still be in transit.
It is better if the data you receive somehow encodes the length of the total data. Read until you know that length, then keep reading until you have received that much data. Various approaches are possible; the common one is to allocate a buffer large enough to hold all the data once you know the length.
Another approach is to use fixed-size buffers, and always try to receive min(missing, bufsize), decreasing missing after each recv().
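A sketch of that idea, receiving straight into the destination buffer so each recv() asks for at most 'missing' bytes (my hypothetical helper, in the guide's C style):
int recv_exact(int sockfd, char *dest, size_t missing)
{
    while (missing > 0) {
        ssize_t n = recv(sockfd, dest, missing, 0);
        if (n <= 0)
            return -1; // error, or the peer closed before all data arrived
        dest += n;
        missing -= n;
    }
    return 0; // all requested bytes received
}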
The first thing you need to learn when doing TCP/IP programming: one write/send call might take several recv calls to receive, and several write/send calls might need just one recv call to receive. And anything in between.
You'll need to loop until you have all data. The return value of recv() tells you how much data you received. If you simply want to receive all data on the TCP connection, you can loop until recv() returns 0 - provided that the other end closes the TCP connection when it is done sending.
If you're sending records/lines/packets/commands or something similar, you need to make your own protocol over TCP, which might be as simple as "commands are delimited with \n".
The simple way to read/parse such a command would be to read 1 byte at a time, building up a buffer with the received bytes and checking for a \n byte each time. Reading 1 byte at a time is extremely inefficient, though, so you should read larger chunks at a time.
Since TCP is stream-oriented and does not provide record/message boundaries, it becomes a bit more tricky: you have to recv() a chunk of bytes, check the received buffer for a \n byte, and if one is there, append the bytes up to it to the previously received bytes and output that message. Then check the remainder of the buffer after the \n, which might contain another whole message or just the start of one.
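A sketch of that chunked, \n-delimited framing (my example; hypothetical names, POSIX style as in the guide):
std::string pending;  // bytes received but not yet parsed
char chunk[1024];
ssize_t n;
while ((n = recv(sockfd, chunk, sizeof(chunk), 0)) > 0) {
    pending.append(chunk, n);
    size_t pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        std::string message = pending.substr(0, pos);
        pending.erase(0, pos + 1); // keep any partial message that follows
        // ... handle 'message' ...
    }
}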
Yes, you have to loop over recv() until you receive '\0', an error (a negative value from recv()), or 0 from recv().
For the first option: only if this zero byte is part of your protocol (i.e. the server actually sends it). From your code, however, it seems the zero is just there so the buffer content can be used as a C string on the client side.
As for a return value of 0 from recv(): this means the connection was closed (it could be part of your protocol that this happens).

Winsock2 tcp/ip - some data packets are ignored probably due to null terminator from the previous packet

I wrote a simple client-server program. Network.h is a header file which uses Winsock2.h (TCP/IP mode) to create a socket, accept/connect in blocking mode, and send/recv in non-blocking mode. I made it so that the function string TNetwork::Recv(int size) returns the string "Nothing" if it gets a WSAEWOULDBLOCK error (no data has been received yet).
Here is my main function:
int main(){
    string Ans;
    TNetwork::StartUp(); //WSA start up, etc
    cin >> Ans;

    if (Ans == "0"){ // 0 --> server
        TNetwork::SetupAsServer(); //accept connection (in blocking mode!)
        while (true){
            TNetwork::Send("\nAss" + '\0'); //without null terminator, the client may read extra bytes, causing undefined behavior (?)
            TNetwork::Send("embly" + '\0');
            cin >> Ans;
        }
    }
    else{ // others --> regard Ans as IP address. e.g. I can type "127.0.0.1"
        TNetwork::SetupAsClient(Ans);
        string Rec;
        while (true){
            Rec = TNetwork::Recv(1000);
            if (Rec != "Nothing"){
                cout << Rec;
            }
        }
    }
    system("PAUSE");
}
Supposedly, the client should print "Assembly" once connected and whenever the server enters anything into its console window. Sometimes, though, the client only prints "\nAss", without the "embly".
To my understanding, TCP ensures that all data arrives and in the correct order, so I guess what happens is that both packets arrive at the same time, which happens quite often over an unstable internet connection. And because of the null terminator, the client ignores the "embly", since the Recv() function stops reading when it hits a null terminator.
So, how can I ensure that the client will always read all data packets correctly?
Yes, the network stack will send the data in the correct order and doesn't care what termination type you use. This has to do with how you're receiving and processing the data stream (note: not packets, stream). If you receive all 11 bytes and print it to the screen, the print function will stop when it reaches the zero, but the rest of the data is still there.
Note: since it's a stream, what happens if you received only 10 bytes of data from the stream? You need to scan what you receive for the zero to know if you've received a full "zero-terminated string" if that's how you want to communicate your data.
EDIT: Also, I don't think "\nAss" + '\0' is doing what you think it is. Instead of adding a 0 character to the end of the string (which already has one, by the way), it's adding 0 to your string pointer.
As #mark points out, TCP is all about streams, not packets. TCP takes care of ensuring that data is reliably transmitted from A to B and that the data is delivered to the consumer in the order in which it was transmitted. Yes, the data is packetized on the wire, but the TCP stack on the system takes those packets and builds the stream which it makes available to you through the recv() function. The TCP stack handles out-of-order data, missing data, and duplicated data such that by the time your application sees it, the stream is a mirror-copy of when the sender sent.
To properly receive TCP data, you will typically need some kind of loop that reads data from the socket when it becomes available. The way I normally do this is to have a thread that is dedicated to servicing the socket. In the thread function is a loop that reads data from the socket when it becomes available and is idle otherwise. This loop reads data into a buffer of, say, 1 KB. Once the data is received from the socket into this buffer, the buffer is copied to another thread for processing. In the thread function for the processing thread is a loop that receives the 1 KB buffers from the socket thread and adds them to the back end of a master buffer of, say, 1 MB. The processing thread then processes the messages out of this master buffer and makes them available to the application.
For a simple demo application, two threads may be overkill. The two threads I've described could be certainly be combined into one, but for my application, it is more efficient to have two threads and take advantage of the multiple cores on my system. The point is, if you're going to have a front-end UI, there's not going to be a way around using at least one thread and still have the UI be responsive.
One other thing. There are two commonly-used mechanisms for protocol design. You're using one, namely, a marker (e.g., a null terminator, etc.) to signal the begin/end of a message. I don't prefer this mechanism mainly because the marker may actually need to be part of the message at some point. The other mechanism is to have a header on each message that tells, at a minimum, how long the message is. I prefer this mechanism and include in my headers a sync word and the message type as well. For example,
struct Header
{
    __int16 _sync;   // a hex pattern, e.g., 0xABCD
    __int16 _type;
    __int32 _length;
};
That's a total of 8 bytes. So when processing from the master buffer, I read the first 8 bytes, verify the sync word, and get the length. I determine if there are 'length' bytes available in the master buffer. If not, I have to wait until the socket thread provides me more data before checking again. If so, I extract 'length' bytes from the master buffer and pass that to an object created according to the specified type, which knows how to interpret that particular message. Then repeat.
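For instance, extracting one message from the master buffer might look roughly like this sketch (assuming a std::vector<char> master buffer, same-endian peers, and <vector>/<cstring> included):
bool ExtractMessage(std::vector<char>& master)
{
    if (master.size() < sizeof(Header))
        return false; // header not complete yet
    Header hdr;
    memcpy(&hdr, master.data(), sizeof(hdr));
    if ((unsigned __int16)hdr._sync != 0xABCD)
        return false; // stream corrupt: resynchronize or drop the connection
    size_t total = sizeof(Header) + hdr._length;
    if (master.size() < total)
        return false; // body not complete yet; wait for more data
    // ... dispatch the hdr._length payload bytes according to hdr._type ...
    master.erase(master.begin(), master.begin() + total); // consume the message
    return true;
}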
As I mentioned, I use a master buffer of 1 MB or so. As messages are processed, it is important to remove them from the master buffer so there is additional space available for new data on the back end. This involves simply copying the unprocessed data, if any, to the beginning of the buffer. In cases where data comes in faster than you can process it, the master buffer may need the ability to resize itself to accommodate the additional data.
I hope that's not overwhelming. Start simple and add as you go.

Asynchronous libpcap: losing packets?

I have a program that sends a set of TCP SYN packets to a host (using raw sockets) and uses libpcap (with a filter) to obtain the responses. I'm trying to implement this in an asynchronous I/O framework, but it seems that libpcap is missing some of the responses (namely the first packets of a series when it takes less than 100 microseconds between the TCP SYN and the response). The pcap handle is setup like this:
pcap_t* pcap = pcap_open_live(NULL, -1, false, -1, errorBuffer);
pcap_setnonblock(pcap, true, errorBuffer);
Then I add a filter (contained on the filterExpression string):
struct bpf_program filter;
pcap_compile(pcap, &filter, filterExpression.c_str(), false, 0);
pcap_setfilter(pcap, &filter);
pcap_freecode(&filter);
And on a loop, after sending each packet, I use select to know if I can read from libpcap:
int pcapFd = pcap_get_selectable_fd(pcap);
fd_set fdRead;
FD_ZERO(&fdRead);
FD_SET(pcapFd, &fdRead);
select(pcapFd + 1, &fdRead, NULL, NULL, &selectTimeout);
And read it:
if (FD_ISSET(pcapFd, &fdRead)) {
struct pcap_pkthdr* pktHeader;
const u_char* pktData;
if (pcap_next_ex(pcap, &pktHeader, &pktData) > 0) {
// Process received response.
}
else {
// Nothing to receive (or error).
}
}
As I said before, some of the packets are missed (they fall into the "nothing to receive" else branch). I know these packets are there, because I can capture them in a synchronous fashion (using tcpdump or a thread running pcap_loop). Am I missing some detail here? Or is this an issue with libpcap?
If the FD for the pcap_t is reported as readable by select() (or poll() or whatever call/mechanism you're using), there is no guarantee that this means that only one packet can be read without blocking.
If you use pcap_next_ex(), you will read only one packet; if there's more than one packet available to be read, then, if you do another select(), it should immediately return, reporting the FD as readable again, in which case you'll presumably call pcap_next_ex() again, and so on. This means at least one system call per packet (the select()), and possibly more calls, depending on which OS and OS version you're running and which version of libpcap you have.
If, instead, you were to call pcap_dispatch(), with a packet-count argument of -1, that call will return all the packets that can be obtained with a single read operation and process all of them, so, on most platforms, you may get multiple packets with one or two system calls if there are multiple packets available (which, with high network traffic, as you might get if you're testing your program with a SYN flood, is likely to be the case).
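For example (a sketch; the callback name is arbitrary):
static void onPacket(u_char* user, const struct pcap_pkthdr* hdr, const u_char* data)
{
    // Process one received response.
}

// Inside the select() loop, replacing the pcap_next_ex() call:
if (FD_ISSET(pcapFd, &fdRead)) {
    if (pcap_dispatch(pcap, -1, onPacket, NULL) < 0) {
        // Error; see pcap_geterr(pcap).
    }
}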
In addition, on Linux systems that support memory-mapped packet capture (I think all 2.6 and later kernels do, and most if not all 2.4 kernels do), and with newer versions of libpcap, pcap_next_ex() has to make a copy of the packet to avoid having the kernel change the packet out from under the code processing the packet and to avoid "locking up" a slot in the ring buffer for an indefinite period of time, so there's an extra copy involved.
This seems to be an issue with libpcap using memory mapping under Linux. Please see my other question for details.

Server's NonBlocking TCP socket taking time to stream content

Problem
- I am working on a Streaming server & created a nonblocking socket using:
flag=fcntl(m_fd,F_GETFL);
flag|=O_NONBLOCK;
fcntl(m_fd,F_SETFL,flag);
Server then sends the Media file contents using code:
bool SendData(const char *pData, long nSize)
{
    int fd = m_pSock->get_fd();
    fd_set write_flag;
    while (1)
    {
        FD_ZERO(&write_flag);
        FD_SET(fd, &write_flag);
        struct timeval tout;
        tout.tv_sec = 0;
        tout.tv_usec = 500000;
        int res = select(fd + 1, 0, &write_flag, 0, &tout);
        if (-1 == res)
        {
            print("select() failure\n");
            return false;
        }
        if (1 == res)
        {
            unsigned long sndLen = 0;
            if (!m_pSock->send(pData, nSize, &sndLen))
            {
                print("socket send() failure\n");
                return false;
            }
            pData += sndLen; // advance past the bytes already sent
            nSize -= sndLen;
            if (!nSize)
                return true; // everything is sent
        }
    }
}
Using the above code, I am streaming a, say, 200-second audio file, which I expect the server to stream in 2-3 seconds using the full available network bandwidth (throttling off), but the problem is that the server takes 199-200 seconds to stream the full contents.
While debugging, I commented out the
m_pSock->send()
section and dumped the file locally instead. It takes 1-2 seconds to dump the file.
Questions
- If I am using a non-blocking TCP socket, why is send() taking so much time?
Since the data is always available, select() will return immediately (as we have seen while dumping the file). Does that mean send() is affected by recv() on the client side?
Any input on this would be helpful. Client behavior is not in our scope.
Your client is probably doing some buffering to avoid network jitter, but it is likely still playing the audio file in real time. So, the file transfer rate is matched to the rate that the client is consuming the data. Since it is a 200 second audio file, it will take about 200 seconds to complete the transfer.
Because the TCP output and input buffers are probably much smaller than the audio file, the reading speed of the receiving application can slow down the sending speed.
When both the sender's TCP output buffer and the receiver's input buffer are full, the sender's TCP stack cannot accept any more data from the sending application. So sending will block until there is space again.
If the receiver reads the TCP stream at the same speed the data is needed for playback, the transfer takes about 200 seconds, or a little less.
This can be avoided by using application-layer buffering on the receiving end.
The problem could be that the client side is using blocking TCP and is processing all the data on a single thread, with no buffer/queue between the socket and the "player" of the file. In that case, your side being non-blocking only speeds things up until the TCP/IP protocol stack buffers, NIC buffers, etc. are full. After that you can ultimately only send data as fast as the client side consumes it. Remember, TCP is a reliable, point-to-point protocol.
Where does your client code come from in your testing? Is it some sort of simple test client someone has written?