setsockopt TCP_NODELAY question on Windows Mobile - c++

I have a problem on Windows Mobile 6.0.
I would like to create a TCP connection that does not
use the Nagle algorithm, so that my data is sent as soon as I call
the "send" function instead of being buffered because the
amount of data is too small.
I tried the following:
BOOL b = TRUE;
setsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY, (char*)(&b), sizeof(BOOL));
It works fine on the desktop. But on Windows Mobile, if I
set this value and then query it, the returned
value is 8. And the network traffic analysis shows that
nothing changed.
Is there any way to force a flush to my socket?

It seems to me that the TCP_NODELAY option is not supported on Windows Mobile. Check the MSDN documentation; it might say something to that effect. I remember a while back struggling with setting a few socket options, including TCP_NODELAY and the send and receive buffer sizes, and the setsockopt call would fail. Check whether setsockopt returns SOCKET_ERROR; if it does, get ::WSAGetLastError() and see if that leads you anywhere. In my case, I remember having to do without these options because they were not supported. I was working on Windows Mobile 5.
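For reference, a minimal sketch of that check, assuming Winsock has already been initialised with WSAStartup and socketfd is a valid, connected SOCKET:

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>

// Sketch: set TCP_NODELAY and report the Winsock error code if the call fails.
bool disableNagle(SOCKET socketfd)
{
    BOOL enable = TRUE;
    if (setsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<const char*>(&enable), sizeof(enable)) == SOCKET_ERROR) {
        std::printf("setsockopt(TCP_NODELAY) failed: %d\n", ::WSAGetLastError());
        return false;
    }

    // Read the option back to see what the stack actually stored.
    BOOL value = FALSE;
    int len = sizeof(value);
    if (getsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<char*>(&value), &len) == SOCKET_ERROR) {
        std::printf("getsockopt(TCP_NODELAY) failed: %d\n", ::WSAGetLastError());
        return false;
    }
    return value != FALSE;
}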

Are you setting the option on both ends of the connection and after the connection has been established? I just had someone test it and it worked fine with TCP over ActiveSync, significantly improving the command-response cycle time in the test app (about a 4x improvement in fact).

The server is a given and I cannot modify it, but our Symbian client,
for example, works fine with it using this option.
I tried setting this option both before and after creating the connection
but nothing changed.
I use TCP over Windows Mobile Device Center (since I use Vista).

It just occurred to me, (this is a wild guess and probably not likely) but maybe you're having a delayed ack problem due to your send buffer being smaller than the size of the data you're writing. Nagle may have nothing to do with it.
Does the receiving side send any data back immediately? If not, your peer will delay its ACK for up to 200 ms, waiting to piggyback the ACK on some data to make better use of bandwidth.
When the socket's send buffer is smaller than the data being written, the call to send will block until the ACK has been received and all the data has been sent.
For example, if your send buffer is 8192 bytes, you send 8193 bytes, and your peer sends no data back, then your write will block for 200 ms (or however long your peer's implementation delays the ACK), effectively making it look like Nagle is killing you even when it is disabled.
If this is the case, you could either increase the send buffer size or have your peer always send back a null byte to force the ACK to be sent immediately.
Otherwise, I would maybe try playing around with NTttcp_x86 a bit to model your applications send / receive patterns and see if maybe something else is going on.
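If you do want to try the bigger send buffer, a minimal sketch (Winsock; socketfd is assumed to be your connected SOCKET and the 64 KB value is only an example to tune):

// Sketch: enlarge the send buffer and verify what the stack actually granted.
int sndbuf = 64 * 1024;
if (setsockopt(socketfd, SOL_SOCKET, SO_SNDBUF,
               reinterpret_cast<const char*>(&sndbuf), sizeof(sndbuf)) == SOCKET_ERROR) {
    // inspect ::WSAGetLastError() here
}
int granted = 0;
int len = sizeof(granted);
getsockopt(socketfd, SOL_SOCKET, SO_SNDBUF,
           reinterpret_cast<char*>(&granted), &len);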

Related

QTcpSocket: Setting LowDelayOption seems to have no effect?

I have a Qt GUI application that uses QTcpSocket to send and receive TCP packets to and from a server. So far I've had success making the TCP socket connections (there are 2 separate socket connections because there are 2 different message sets. Same IP address for both but 2 different port numbers) and sending and receiving packets. Most of the messages that my application sends are kicked off via push-button on the GUI's main window (one message is sent periodically using a QTimer that expires every 1667ms).
The server has a FIFO (128 messages deep) and sends a specific message to my application that communicates when the FIFO is 1/2 full, 3/4 full, and full. It's tedious to test this functionality by just mashing the send button on the GUI so I had the idea of loading a .csv file that could be pre-filled (the message has several different configurable parameters) with what I want to send. Each line gets read and turned into a message and sent on the TCP socket.
From my main window I open up a QFileDialog when a push-button on the GUI is clicked. Then when a .csv file is navigated to and selected the function reads the .csv file one line at a time, pulls out all the individual parameters, fills the message with the parameters, and then sends it out to the socket. Each message is 28 bytes. It repeats this until there are no lines left in the .csv file.
What I am noticing on Wireshark is that instead of sending a bunch of individual TCP packets they are all being put together and sent as one large TCP packet.
When I first tested this out I did not know about the LowDelayOption so when I found the information about it in the documentation for QAbstractSocket I thought "Aha! That must be it! The solution to my problem!" but when I added it to my code it did not seem to have any kind of effect at all. It's still being sent as one large TCP packet. For each socket, I am calling setSocketOption to set the LowDelayOption to 1 in the slot function that receives the connected() signal from the socket. I thought maybe the setSocketOption call wasn't working so I checked this by calling socketOption to get the value of the LowDelayOption and it's 1.
Is there something else I need to be doing? Am I doing something wrong?
Thanks for your time and your help. If it matters, I am developing this on Windows and I am using Qt 5.9.1.
... send and receive TCP packets to and from a server.
From this I am getting the vibe that your application relies on a certain amount of data ('a packet') being received in a single receive call.
You can't really rely on that. Data you send over TCP can be fragmented on the way. Also, on the receiving end, the TCP implementation may put multiple packets received from the network into the receiving socket's buffer before you have read the first one, and you have no way of telling what pieces they were originally sent in.
So you should just treat TCP as a pipe through which bytes of data flow with some unknown and potentially variable delay. That variable delay causes data to be received in bigger or smaller chunks at random.
If you want to have a packet structure, you should add a packet header containing at least the packet length to the data you transmit.
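For illustration, a minimal sketch of the receiving side of such a scheme with QTcpSocket, assuming a 4-byte big-endian length prefix (the member names, the handleMessage() helper and the prefix format are all hypothetical, not part of your protocol):

#include <QtEndian>

// Sketch: accumulate bytes until a whole length-prefixed message is available.
void MyClient::onReadyRead()
{
    m_buffer.append(m_socket->readAll());          // m_buffer is a QByteArray member
    while (m_buffer.size() >= 4) {
        const quint32 len = qFromBigEndian<quint32>(
            reinterpret_cast<const uchar*>(m_buffer.constData()));
        if (m_buffer.size() < static_cast<int>(4 + len))
            break;                                  // wait for the rest of the message
        const QByteArray message = m_buffer.mid(4, len);
        m_buffer.remove(0, 4 + len);
        handleMessage(message);                     // hypothetical handler
    }
}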
I hope this helps.
From QTcpSocket documentation:
TCP (Transmission Control Protocol) is a reliable, stream-oriented, connection-oriented transport protocol. It is especially well suited for continuous transmission of data.
Stream-oriented means that there is nothing like the datagrams of UDP sockets.
There is only a stream of data, and you never know in what chunks it will arrive.
The TCP protocol gives you only reliability; you have to provide message extraction on your own, e.g. send the message length before each message, or use QDataStream (check the
Fortune server and Fortune client examples).
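A hedged sketch of the sending side in the style of those Fortune examples (the quint32 size prefix, the message payload and the socket pointer are assumptions, not something Qt imposes):

// Sketch: write one length-prefixed message with QDataStream.
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_9);
out << quint32(0);                            // placeholder for the size
out << message;                               // e.g. a QByteArray payload
out.device()->seek(0);
out << quint32(block.size() - sizeof(quint32));
socket->write(block);                         // socket: your QTcpSocket*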
LowDelayOption from QAbstractSocket::SocketOption
Try to optimize the socket for low latency. For a QTcpSocket this would set the TCP_NODELAY option and disable Nagle's algorithm. Set this to 1 to enable.
It is equivalent to calling setsockopt with the TCP_NODELAY option.
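In other words, the assumption is that these two calls amount to the same thing (the native variant is only shown for comparison; descriptor handling is simplified):

// Qt level:
socket->setSocketOption(QAbstractSocket::LowDelayOption, 1);

// Roughly what happens underneath (Winsock/BSD level):
int flag = 1;
setsockopt(socket->socketDescriptor(), IPPROTO_TCP, TCP_NODELAY,
           reinterpret_cast<const char*>(&flag), sizeof(flag));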
First thing is:
The TCP_NODELAY option is specific to TCP/IP service providers.
And it doesn't work for me either :)
MSDN says that they do not recommend disabling Nagle's algorithm:
It is highly recommended that TCP/IP service providers enable the Nagle Algorithm by default, and for the vast majority of application protocols the Nagle Algorithm can deliver significant performance enhancements. However, for some applications this algorithm can impede performance, and TCP_NODELAY can be used to turn it off. These are applications where many small messages are sent, and the time delays between the messages are maintained. Application writers should not set TCP_NODELAY unless the impact of doing so is well-understood and desired because setting TCP_NODELAY can have a significant negative impact on network and application performance.
The question is: Do you really need to send your messages as fast as possible?
If yes, consider using QUdpSocket. Maybe tell us more about the messages that you are sending.

Send/Recv Socket Blocking Issues

another question about my beloved sockets.
I'll first explain what my case is. After that I will tell you whats bothering me.
I have a client and a server. Both applications are written in C++ with the Winsock2 implementation. The connection runs over TCP and WLAN. WLAN is very important, because it's probably causing the issue and is definitely going to be the communication channel.
I'm connecting two sockets to the server: a SendSocket and a ReceiveSocket. I'm constantly sending video data to the server through the SendSocket. The data is processed, sent back to the client, and displayed. Each socket has its own thread.
The video data is encoded, so I achieve about 500 kB/s. Let's treat this rate as fixed, without further explanation.
Perfect communication viewed by the client:
Send Data
Recv Data
Send Data
Recv Data
...
This is for like 100 frames the case.
But every couple of frames, the stream freezes for about 4 frames and continues after that (4 frames are about 500 ms).
That's the issue I'm facing.
What happens to the stream is the following:
Send Data
Recv Data
Send Data
Send Data
Send Data1 -> blocked send
Recv Data
Recv Data
Send Data2 -> not blocked anymore.
The data gets properly sent on the server side.
Since WLAN is not full duplex (as far as I know), I thought that the send calls were being prioritized for some reason, and after that the receive calls were prioritized, so the send call blocks until the recv calls are done.
Maybe you can tell me, what is happening in the lower layer, which could cause the problem.
By the way, I'm definitely not sure that it isn't just a bandwidth issue, but I thought WLAN should be able to handle 500 kB/s. These 500 kB/s are upstream and downstream together.
Important note: reducing the frame rate to 1/5 does not fix the issue.
I know it's hard to fix this issue with this insight. I would be happy, if you could share your knowledge, so I may be able to fix it myself.
EDIT: It's perfectly fine if the client recv hangs a little, but it must not block the send. The server needs data continuously.
A blocked send means that the socket send buffer is full, which in turn means either (a) that the socket receive buffer at the receiver is full, i.e. the receiver isn't reading as fast as you're sending, or (b) that there are network losses causing the sender to retry. In either case there is nothing you can do about it at the sending end.
Someone is bound to mention non-blocking I/O as a solution, but it isn't: at the point where a blocking sender blocks, a non-blocking sender will get -1 from send() with errno == EAGAIN/EWOULDBLOCK, which doesn't solve the actual problem at all.
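To illustrate the point, a rough Winsock-flavoured sketch of what the non-blocking version ends up doing anyway (names are illustrative):

#include <winsock2.h>

// Sketch: a non-blocking send still has to wait for buffer space somewhere.
void sendOrWait(SOCKET sendSock, const char* buf, int len)
{
    int rc = send(sendSock, buf, len, 0);
    if (rc == SOCKET_ERROR && ::WSAGetLastError() == WSAEWOULDBLOCK) {
        // The send buffer is full, exactly as in the blocking case.
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sendSock, &wfds);
        // Waiting for writability is the same wait, just made explicit.
        select(0, nullptr, &wfds, nullptr, nullptr);
    }
}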
All right then. It was definitely a WLAN issue. I had tested over the eduroam WLAN at my university (I don't know if anybody knows it). Now I have tested it with a simple router and it worked fine. It seems like the eduroam WLAN has some trouble with bandwidth or direction changes. I won't look into that...

Winsock send() issue with single byte transmissions

I'm experiencing a frustrating behaviour of Windows sockets that I can't find any info on, so I thought I'd try here.
My problem is as follows:
I have a C++ application that serves as a device driver, communicating with a serial device connected
through a serial to TCP/IP converter.
The serial protocol requires a lot of single-byte messages to be communicated between the device and
my software. I noticed that these small messages are only sent about 3 times after startup, after which they are no longer actually transmitted (checked with Wireshark). All the while, the send() method keeps returning > 0, indicating that the message has been copied to its send buffer.
I'm using blocking sockets.
I discovered this issue because this particular driver eventually has to drop its connection when the send buffer is completely filled (select() fails because of this after about 5 hours, but it happens much sooner when I reduce the SO_SNDBUF size).
I checked, and noticed that when I call send with messages of 2 bytes or larger, transmission never fails.
Any input would be very much appreciated, I am out of ideas how to fix this.
This is a rare case when you should set TCP_NODELAY so that the sends are written individually, not coalesced. But I think you have another problem as well. Are you sure you're reading everything that's being sent back? And acting on it properly? It sounds like an application protocol problem to me.

Forced server-side socket close without SO_LINGER > 0 can lose data, right?

I'm writing a cross-platform client application that uses sockets, written in C++. I'm having problems where the server is doing a hard close on the socket when it's done sending me info.
I've been reading other posts on this topic, and I'm not so much interested in the rights or wrongs of this approach, but it seems the server is either explicitly setting SO_LINGER=0, or that's the default behavior on that system (not sure, it's a Linux box).
I can see (in Wireshark) that the data was sent to me, followed within milliseconds by an RST, indicating a hard close by the server. I personally don't agree with this approach, as it should be up to the client to shut down the socket.
The server team says there's nothing wrong with that approach (doing a hard close rather than a shutdown); it's typical on servers to avoid accumulating TIME_WAIT sockets. On Windows my select() returns, indicating there's something to read (while I haven't read any of this "in transit" data yet).
However, because of the quick arrival of the RST, on Windows recv() returns -1 and I'm seeing a 10054 for the error code (connection reset by peer). This wouldn't be too bad if I could at least get the data that was sent, but it seems that once my client's socket stack sees the RST any unread bytes are no longer made available to me.
On Linux (client), there's no problem. It seems the TCP stack is behaving slightly differently, in that I can read the outstanding bytes before the RST is honoured. I'm having trouble convincing the server guys they have a bug, given that it works for a Linux client.
First off, am I correct? Is this a server-side issue? I can't see that the client end is doing anything wrong, so it must be right?
It seems the server team is adamant that they want to perform the close, and they don't want to have TIME_WAIT sockets, so I was going to push for them to add an SO_LINGER of, say, 2 seconds. Does that sound like it will solve my problem? From what I understand this will stop the server from sending out an RST so soon after sending data, and should give me a chance to read the outstanding bytes.
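For reference, a minimal sketch of what that proposed change might look like on the server (POSIX flavour, since it's a Linux box; server_fd and the 2-second value are only placeholders):

#include <sys/socket.h>
#include <unistd.h>

// Sketch: linger on close for up to 2 seconds so queued data can drain
// before the connection is torn down.
struct linger lin;
lin.l_onoff  = 1;   // enable lingering close
lin.l_linger = 2;   // seconds to wait for unsent data to be sent and acknowledged
setsockopt(server_fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
// ...
close(server_fd);   // now blocks for up to 2 s instead of aborting immediately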
Found a definitive answer to my own question:
"...Upon reception of RST segment, the receiving side will immediately abort the connection. This statement has more implications than just meaning that you will not be able to receive or send any more data to/from this connection. It also implies that any unread data still in the TCP reception buffer will be lost..." It cites the book "TCP/IP Internetworking Volume II". I don't have that book, so I can only take his word for it. Doesn't seems to discard data on Linux, only Windows...
Olivier Langlois's blog
The side-effect of fiddling with SO_LINGER to force a reset is that all pending data is lost. The fact that you don't receive it is all the proof you need that the server team is wrong to do this.
RFC 793 cited below says 'this command [ABORT] causes all pending SENDs and RECEIVEs to be aborted, ... and a special RESET message to be sent to the TCP on the other side of the connection.' See also W.R. Stevens, TCP/IP Illustrated, Vol. 1, p. 287: 'Aborting a connection provides two features to the application: (1) any queued data is thrown away and the reset is sent immediately, and (2) the receiver of the RST can tell that the other end did an abort instead of a normal close'. There is similar wording, along with an extract from the BSD code that implements it, in Vol. 2.
The TIME_WAIT state only occurs on a socket which sends a FIN before it has received one: see RFC 793. So the server should be waiting for a FIN from the client, with a suitable timeout, rather than resetting. This will also permit the client to do connection pooling.
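A sketch of that alternative on the server side (POSIX; the timeout value and the helper name are assumptions): shut down the write side first, then drain until the client's FIN arrives before closing.

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Sketch: graceful server-side close that waits for the client's FIN.
void gracefulClose(int fd, int timeoutSeconds)
{
    shutdown(fd, SHUT_WR);                       // send our FIN, keep the read side open

    struct timeval tv = { timeoutSeconds, 0 };   // e.g. 5 seconds
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[512];
    // recv() returning 0 means the peer's FIN arrived; a timeout breaks the loop too.
    while (recv(fd, buf, sizeof(buf), 0) > 0) {
        // discard whatever the client still sends
    }
    close(fd);
}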

Send buffer empty of Socket in Linux?

Is there a way to check whether the send buffer of a TCP connection is completely empty?
I haven't found anything so far, and I just want to make sure a connection is not closed by my server while there is still data being transmitted to a certain client.
I'm using poll to check whether I'm able to send data on a non-blocking socket. But that doesn't tell me whether EVERYTHING in the buffer has been sent, does it?
In Linux, you can query a socket's send queue with ioctl(sd, SIOCOUTQ, &bytes). See man ioctl for details.
The information is not completely reliable in the sense that it is possible that the data has been received by the remote host, since the buffer cannot be emptied until an ACK is received. You probably should not use it to add another level of flow-control on top of TCP.
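A minimal sketch of that query, assuming sd is a connected TCP socket descriptor:

#include <sys/ioctl.h>
#include <linux/sockios.h>   // defines SIOCOUTQ
#include <cstdio>

// Sketch: ask the kernel how many bytes are still sitting in the send queue.
int queued = 0;
if (ioctl(sd, SIOCOUTQ, &queued) == -1) {
    std::perror("ioctl(SIOCOUTQ)");
} else if (queued == 0) {
    // Nothing left in the send queue; as noted above, data that was sent but
    // not yet acknowledged also counts, so 0 means it has all been ACKed as well.
}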
If the remote host actually closes the connection (or half-closes it), then the socket becomes unwritable, regardless of how much data might have been in the buffer. You can detect this condition by writing 0 bytes to the socket.
The more difficult (and often more likely) condition is the remote host becoming unreachable, because of network issues or because it crashes. In that case, data will pile up in the send buffer, but that can also happen because the remote host's receive buffer is full (perhaps because the process reading the buffer doesn't have enough resources to process its input). In the case of network routing issues, you might get a router notification (an ICMP error), which should make the socket unwritable; unfortunately, there are many network errors which just result in black holes.