How to know if the TCP transmission succeeded with netconn LWIP? - c++

I'm using the API netconn of LWIP with a STM32F746NNG-NUCLEO.
I can send data over TCP/IP using netconn_write() in the happy case (no disconnection, perfect conditions for the 4G modem). But when the transmission fails, how can I detect that it failed?
At the moment, when I simulate a disconnection of my server, netconn_write() still returns ERR_OK, which is clearly misleading because the data was never received by the server.
Does anybody know how to fix it?
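For what it's worth, a netconn_write() that returns ERR_OK only means the data was queued in the local TCP send buffer, not that the peer received it; a failure usually shows up on a later netconn call or through an application-level reply. A minimal sketch of that second approach, assuming LWIP_SO_RCVTIMEO is enabled and your server answers each message (the helper name is ours):

#include "lwip/api.h"

/* Sketch: treat a write as delivered only after an application-level reply.
 * netconn_write() returning ERR_OK just means the data was queued locally. */
err_t send_and_wait_for_reply(struct netconn *conn, const void *data, size_t len)
{
    err_t err = netconn_write(conn, data, len, NETCONN_COPY);
    if (err != ERR_OK)
        return err;                       /* queuing already failed */

    netconn_set_recvtimeout(conn, 5000);  /* 5 s; requires LWIP_SO_RCVTIMEO */

    struct netbuf *buf = NULL;
    err = netconn_recv(conn, &buf);       /* ERR_TIMEOUT, ERR_RST, ... on failure */
    if (err == ERR_OK) {
        /* ...validate the application-level reply here... */
        netbuf_delete(buf);
    }
    return err;
}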

Related

How to keep a UDP socket connection open between 2 hosts

I'm working on a simple chatroom based on C++ and UDP, and I'm using this as a base. Every time the client and server say "hello" to each other, both processes simply end. I'd like to keep the socket open after that exchange so I can send something else later, but I haven't found a way to do so. How do I do that? I haven't found much information on what I need, so any help is appreciated. Thanks in advance.
You don't need to send a pulse or a heartbeat to keep the socket open. The socket will remain open as long as the program is running, or until you call close() on it.
You can wrap your send and receive calls in an infinite loop, but note that the example code you linked to is far too simple for a chat client: you will need to handle errors such as the underlying connection going offline (for example, the interface being disconnected or brought down, in which case the send and recv calls return an error with an associated errno). You should look into the select, poll and epoll system calls to detect such errors and deal with them.
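A minimal sketch of that loop, assuming a POSIX environment and a UDP socket sockfd that is already set up (names are illustrative):

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <stdio.h>

/* Keep the same socket alive across many messages; it stays open until
 * close() is called or the process exits. */
void chat_loop(int sockfd)
{
    char buf[2048];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sockfd, &readfds);

        struct timeval tv = { 1, 0 };                 /* poll every second */
        int ready = select(sockfd + 1, &readfds, NULL, NULL, &tv);
        if (ready < 0) {
            perror("select");                         /* real error on the descriptor */
            break;
        }
        if (ready == 0)
            continue;                                 /* nothing to read yet */

        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) {
            perror("recvfrom");                       /* e.g. interface went down */
            break;
        }
        printf("received %zd bytes\n", n);
        /* ...send a reply with sendto() and loop again... */
    }
}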

Send/Recv Socket Blocking Issues

Another question about my beloved sockets.
I'll first explain my setup; after that I'll tell you what's bothering me.
I have a client and a server. Both applications are written in C++ with the winsock2 implementation. The connection runs over TCP and WLAN. WLAN is very important, because it is probably causing the issue and is definitely going to be the communication channel.
I'm connecting two sockets to the server: a send socket and a receive socket. I'm constantly sending video data to the server through the send socket. The data is processed, sent back to the client and displayed. Each socket has its own thread.
The video data is encoded, so I reach about 500 kB/s. Let's treat this rate as fixed, without further explanation.
Perfect communication viewed by the client:
Send Data
Recv Data
Send Data
Recv Data
...
This is the case for roughly 100 frames.
But every couple of frames, the stream freezes for about 4 frames and continues after that. (4 frames are roughly 500 ms.)
That's the issue I'm facing.
What happens to the stream is the following:
Send Data
Recv Data
Send Data
Send Data
Send Data1 -> blocked send
Recv Data
Recv Data
Send Data2 -> not blocked anymore.
The data is properly sent on the server side.
Since WLAN is not full duplex (as far as I know), I thought that the send calls are prioritized for some reason, and after that the receive calls are prioritized, so the send call blocks until the recv calls are done.
Maybe you can tell me what is happening in the lower layers that could cause the problem.
By the way, I'm definitely not sure it isn't just a bandwidth issue, but I thought WLAN should be able to handle 500 kB/s. These 500 kB/s are upstream and downstream combined.
Important notice: reducing the frame rate to 1/5 does not fix the issue.
I know it's hard to fix this issue with only this insight. I would be happy if you could share your knowledge, so I may be able to fix it myself.
EDIT: It's perfectly fine if the client's recv hangs a little, but it must not block the send. The server needs data continuously.
A blocked send means that the socket send buffer is full, which in turn means either (a) that the socket receive buffer at the receiver is full, i.e. the receiver isn't reading as fast as you're sending, or (b) that there are network losses causing the sender to retry. In either case there is nothing you can do about it at the sending end.
Someone is bound to mention non-blocking I/O as a solution, but it isn't: at the point where a blocking sender blocks, a non-blocking sender will get -1 from send() with errno == EAGAIN/EWOULDBLOCK, which doesn't solve the actual problem at all.
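To illustrate (a Winsock-flavoured sketch with illustrative names, assuming the socket was already put into non-blocking mode with ioctlsocket(sock, FIONBIO, ...)): the non-blocking call just turns the same condition into an error you must retry, so the data still sits in your application until the network drains the send buffer.

#include <winsock2.h>

/* Sketch: a non-blocking send reports the full send buffer as an error
 * instead of blocking; the data is NOT queued and must be retried later. */
int send_nonblocking(SOCKET sock, const char *buf, int len)
{
    int r = send(sock, buf, len, 0);
    if (r == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
        /* Buffer full: keep the data and retry when select() reports the
         * socket writable -- the very delay the blocking send showed you. */
        return 0;
    }
    return r;   /* bytes queued, or SOCKET_ERROR for a real failure */
}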
Alright then. It was definitely a WLAN issue. I tested over the eduroam WLAN at my university (I don't know if anybody knows it). Now I have tested it with a simple router and it worked fine. It seems the eduroam WLAN has some trouble with the bandwidth or with the direction changes. I won't look into that...

Forced server-side socket close without SO_LINGER > 0 can lose data, right?

I'm writing a cross-platform client application that uses sockets, written in C++. I'm having problems where the server is doing a hard close on the socket when it's done sending me info.
I've been reading other posts on this topic, and I'm not so much interested in the rights or wrongs of this approach, but it seems the server is either explicitly setting SO_LINGER=0, or that's the default behavior on that system (not sure, it's a Linux box).
I can see (in Wireshark) that the data sent to me is followed within milliseconds by an RST, indicating a hard close by the server. I personally don't agree with this approach, as it should be up to the client to shut down the socket.
The server team says there's nothing wrong with that approach (doing a hard close rather than a shutdown); it's typical on servers, to avoid accumulating TIME_WAIT sockets. On Windows, my select() returns indicating there's something to read (while I haven't read any of this "in transit" data yet).
However, because of the quick arrival of the RST, on Windows recv() returns -1 and I see error code 10054 (connection reset by peer). This wouldn't be too bad if I could at least get the data that was sent, but it seems that once my client's socket stack sees the RST, any unread bytes are no longer made available to me.
On Linux (client), there's no problem. The TCP stack seems to behave slightly differently, in that I can read the outstanding bytes before the RST is honoured. I'm having trouble convincing the server guys they have a bug, given that it works for a Linux client.
First off, am I correct? Is this a server-side issue? I can't see that the client end is doing anything wrong, so it must be, right?
The server team are adamant that they want to perform the close and don't want to have TIME_WAITs, so I was going to push for them to add an SO_LINGER of, say, 2 seconds. Does that sound like it will solve my problem? From what I understand, this will stop the server from sending out an RST so soon after sending data, and should give me a chance to read the outstanding bytes.
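For reference, asking the server side for a 2-second linger would look roughly like this (a sketch with illustrative names; with l_onoff = 1 and a non-zero l_linger, close() blocks until the queued data is acknowledged or the timeout expires, instead of sending an immediate RST):

#include <sys/socket.h>

/* Sketch (server side, Linux): ask close() to wait up to 'seconds' for
 * queued data to be delivered instead of aborting the connection. */
int set_linger(int fd, int seconds)
{
    struct linger lin;
    lin.l_onoff  = 1;        /* enable lingering close */
    lin.l_linger = seconds;  /* how long close() may block */
    return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
}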
Found a definitive answer to my own question:
"...Upon reception of RST segment, the receiving side will immediately abort the connection. This statement has more implications than just meaning that you will not be able to receive or send any more data to/from this connection. It also implies that any unread data still in the TCP reception buffer will be lost..." It cites the book "TCP/IP Internetworking Volume II". I don't have that book, so I can only take his word for it. Doesn't seems to discard data on Linux, only Windows...
Olivier Langlois's blog
The side-effect of fiddling with SO_LINGER to force a reset is that all pending data is lost. The fact that you don't receive it is all the proof you need that the server team is wrong to do this.
RFC 793 cited below says 'this command [ABORT] causes all pending SENDs and RECEIVEs to be aborted, ... and a special RESET message to be sent to the TCP on the other side of the connection.' See also W.R. Stevens, TCP/IP Illustrated, Vol. 1, p. 287: 'Aborting a connection provides two features to the application: (1) any queued data is thrown away and the reset is sent immediately, and (2) the receiver of the RST can tell that the other end did an abort instead of a normal close'. There is similar wording, along with an extract from the BSD code that implements it, in Vol. 2.
The TIME_WAIT state only occurs on a socket which sends a FIN before it has received one: see RFC 793. So the server should be waiting for a FIN from the client, with a suitable timeout, rather than resetting. This will also permit the client to do connection pooling.
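A rough sketch of that sequence on the server side (illustrative names; the idea is that the client closes first and therefore takes the TIME_WAIT):

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Sketch: after sending the reply, wait for the client's FIN instead of
 * resetting. recv() returning 0 means the client closed; only then do we
 * close ourselves, so the TIME_WAIT lands on the client side. */
void passive_close(int fd)
{
    char drain[512];
    struct timeval tv = { 5, 0 };                  /* suitable timeout */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    while (recv(fd, drain, sizeof(drain), 0) > 0)  /* discard stragglers */
        ;                                          /* 0 = FIN, <0 = timeout */
    close(fd);
}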

Facing an issue with the recv() and send() Winsock API. recv() hangs while receiving the last packet

I am facing an issue with the recv() and send() Winsock APIs. recv() hangs while receiving the last packet.
Problem description:
System A's app is writing data over a non-blocking socket and system B's app is receiving data over a blocking socket in chunks of 64k.
It seems that while reading what is probably the last chunk of 64k, which may be less than or equal to 64k, the receive freezes. I am not sure whether the receive of the last chunk or the send of the last chunk is the issue, but I am observing it intermittently in our legacy applications.
Has anyone faced a similar issue before? If so, please share your input.
If not, can you please suggest some troubleshooting techniques to narrow down the root cause?
Just for information, I have Win2k3 servers.
Thanks,
Varun
Wireshark is a great tool for troubleshooting networking code. It'll show you exactly what packets are entering and leaving your network interface in near real time.
As to your specific issue: are you saying that the last chunk of data might be shorter than 64k? If so, your protocol should include some message-length information so the receiver knows how much data to expect.
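A minimal sketch of such a length-aware read on the receiving side (Winsock flavour, names are illustrative):

#include <winsock2.h>

/* Sketch: read exactly 'len' bytes, looping because recv() may return
 * fewer bytes than requested. Returns 0 on success, -1 on error/close. */
int recv_exact(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int r = recv(s, buf + got, len - got, 0);
        if (r <= 0)              /* 0 = peer closed, SOCKET_ERROR = failure */
            return -1;
        got += r;
    }
    return 0;
}

/* Usage idea: first recv_exact() a fixed-size length header (e.g. a 4-byte
 * value sent with htonl()), then recv_exact() that many payload bytes. */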
A couple of guesses...
If you are using UDP, perhaps one or more packets are being dropped en route (which UDP is permitted to do whenever it feels like). In that case, your receiver might end up waiting for data that is simply never going to arrive; to fix this you would need to either implement some way of automatically resending the lost data, or (if you don't strictly need all the data), some way for the sender to notify the receiver that he is done transmitting, so the receiver might as well stop waiting. (of course you would need to handle the case where this notification gets dropped, as well... it can get complicated if you want 100% robustness)
If you are using TCP, perhaps you are not carefully checking the values returned by send() on the sending side? If you are assuming that send() will always send the number of bytes you asked it to, you might end up thinking send() sent all the bytes when in fact it only sent some (or none) of them... so the sender would think the transmission was complete, while the receiver would be stuck waiting for data that isn't going to arrive.
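A sketch of the careful sending side that answer describes (illustrative names), retrying until every byte has been handed to the stack:

#include <winsock2.h>

/* Sketch: send() may accept only part of the buffer, so keep calling it
 * until everything has been queued or an error occurs. */
int send_all(SOCKET s, const char *buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int r = send(s, buf + sent, len - sent, 0);
        if (r == SOCKET_ERROR)
            return -1;
        sent += r;
    }
    return 0;
}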
You might have a problem with the server sending data down the wire faster than the receiver is able to read it. You could try increasing the receive buffer:
// Ask the stack for a larger receive buffer so bursts from a fast sender
// are less likely to overrun it.
int nSocketBuffer = 131072; // 128k
if (setsockopt(m_sSocket, SOL_SOCKET, SO_RCVBUF,
               (LPCSTR)&nSocketBuffer, sizeof(nSocketBuffer)) == SOCKET_ERROR)
{
    // setsockopt failed; check WSAGetLastError() for the reason
    return false;
}

setsockopt TCP_NODELAY question on Windows Mobile

I have a problem on Windows Mobile 6.0.
I would like to create a TCP connection that does not use the Nagle algorithm, so that my data is sent as soon as I call the send() function rather than being buffered because the amount of data is too small.
I tried the following:
// Disable Nagle so small writes are sent immediately
BOOL b = TRUE;
setsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY, (char*)(&b), sizeof(BOOL));
It works fine on the desktop. But on Windows Mobile, if I set this value and then query it, the returned value is 8, and the network traffic analysis shows that nothing changed.
Is there any way to force a flush to my socket?
It seems to me that the TCP_NODELAY option is not supported on Windows Mobile. Check the MSDN documentation; it might say something to that effect. I remember struggling a while back with setting a few socket options, including TCP_NODELAY and the send and receive buffer sizes, and the setsockopt call would fail. Check whether setsockopt fails; if it does, call ::WSAGetLastError() and see if that leads you anywhere. In my case, I remember having to do without these options, as they were not supported. I was working on Windows Mobile 5.
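A quick check along those lines (a sketch with illustrative names; it sets the option, reports any failure, then reads the value back):

#include <winsock2.h>
#include <stdio.h>

/* Sketch: set TCP_NODELAY, report any failure, then query the option. */
void try_nodelay(SOCKET s)
{
    BOOL on = TRUE;
    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   (const char *)&on, sizeof(on)) == SOCKET_ERROR) {
        printf("setsockopt(TCP_NODELAY) failed: %d\n", WSAGetLastError());
        return;
    }

    BOOL check = FALSE;
    int len = sizeof(check);
    if (getsockopt(s, IPPROTO_TCP, TCP_NODELAY, (char *)&check, &len) == 0)
        printf("TCP_NODELAY reads back as %d\n", (int)check);
}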
Are you setting the option on both ends of the connection and after the connection has been established? I just had someone test it and it worked fine with TCP over ActiveSync, significantly improving the command-response cycle time in the test app (about a 4x improvement in fact).
The server is a given and I cannot modify it, but our Symbian client, for example, works fine with it when this option is set.
I tried setting this option both before and after establishing the connection, but nothing changed.
I use TCP over Windows Mobile Device Center (since I use Vista).
It just occurred to me (this is a wild guess and probably not likely), but maybe you're having a delayed-ACK problem because your send buffer is smaller than the size of the data you're writing. Nagle may have nothing to do with it.
Does the receiving side send any data back immediately? If not, your peer will delay its ACK for up to 200 ms, waiting to piggyback the ACK on some data to make better use of bandwidth.
When the send buffer on the socket is smaller than the data, the call to write will block until the ACK has been received and all the data sent.
For example, if your send buffer is 8192 bytes and you send 8193 bytes and your peer sends no data back, your write will block for 200 ms (or however long your peer's implementation delays the ACK), effectively making it look like Nagle is killing you even when it's disabled.
If this is the case, you could either increase the send buffer size or have your peer always send you back a null byte to force the ACK to be sent immediately.
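If the small-send-buffer theory fits, increasing SO_SNDBUF would look something like this (a sketch; the 64 kB figure is only an example):

#include <winsock2.h>

/* Sketch: enlarge the send buffer so a whole message fits, so the write
 * does not have to block waiting for the peer's (possibly delayed) ACK. */
int grow_send_buffer(SOCKET s)
{
    int sndbuf = 65536;   /* example size: larger than the biggest message */
    return setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                      (const char *)&sndbuf, sizeof(sndbuf));
}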
Otherwise, I would try playing around with NTttcp_x86 a bit to model your application's send/receive patterns and see if something else is going on.