TCP Retransmission Timeout Detection in C++ [duplicate]

This question already has an answer here:
Possible Duplicate: C++ Functions According to TCP
Closed 11 years ago.
In my Windows C++ application I'm using the Winsock API.
I want to detect network errors in my C++ functions.
Using Wireshark I can see that after a network error there are TCP retransmission packets.
Do you know how I can detect TCP retransmission timeouts with C++ functions?

Basically, no way. The sockets API just does not give you such low-level information; you can only detect total connection failure.
If you want EXACTLY what you are asking for, you have to capture network packets and do flow analysis, like Wireshark does. Otherwise, please clarify why you want to detect this. Maybe TCP keepalive or UDP will suffice.
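If TCP keepalive would suffice, Winsock lets you tune the probe timing per socket with the SIO_KEEPALIVE_VALS ioctl. A minimal sketch; the helper name and the timing values here are just examples:
#include <winsock2.h>
#include <mstcpip.h>   // tcp_keepalive, SIO_KEEPALIVE_VALS

bool enableKeepalive(SOCKET s)
{
    tcp_keepalive ka;
    ka.onoff = 1;                 // turn keepalive on
    ka.keepalivetime = 5000;      // first probe after 5 s of idle time
    ka.keepaliveinterval = 1000;  // then probe every second

    DWORD bytesReturned = 0;
    // Once the peer stops answering the probes, subsequent recv()/send()
    // calls fail (WSAETIMEDOUT) and you can tear the connection down.
    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    NULL, 0, &bytesReturned, NULL, NULL) == 0;
}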

If the connection is broken, all calls to recv (or WSARecv) will return an error. TCP itself has retransmission of packets built into the protocol, so you don't really have to do anything in most cases.
If the cable between the two peers is physically broken in some way, though, you won't get an error when receiving. Then you have to implement your own timeout. If your higher-level protocol uses request-response (i.e. you send a request and the other peer returns a response), it is easy: if no response is received within X seconds, close the connection and reconnect.
Edit: In response to the comments:
TCP has this retransmission built in; there is no way to turn it off or to get an error after the first timeout. One way to solve this is to use UDP (SOCK_DGRAM) sockets instead. The problem with that is that you have to take care of everything yourself, including handling timeouts when there are no responses.
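To illustrate the receive-timeout approach from the first part of this answer, here is a minimal Winsock sketch; the helper name and the policy of treating a timeout as a dead connection are mine:
#include <winsock2.h>

// Returns true if data arrived within timeoutMs; false means the peer
// closed, errored, or sent nothing in time.
bool recvWithTimeout(SOCKET s, char* buf, int len, DWORD timeoutMs)
{
    // On Windows, SO_RCVTIMEO takes a DWORD in milliseconds.
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
               reinterpret_cast<const char*>(&timeoutMs), sizeof(timeoutMs));

    int n = recv(s, buf, len, 0);
    if (n > 0)
        return true;    // got data
    // n == 0: peer closed; n < 0 with WSAGetLastError() == WSAETIMEDOUT:
    // no response in time -> close the socket and reconnect.
    return false;
}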

Related

Is it possible to sniff with 100% TCP packet detection using the socket RAW_PACKET option in C?

I am looking into building a custom sniffing application which detects TCP packets. But I see that some of the packets are lost, meaning some packets are not captured by the application.
I am looking for clarification on the questions below:
Is it possible to write a sniffing application in C which detects 100% of TCP packets, without losing a single packet, using the socket RAW_PACKET option?
Any specific design considerations to think of?
FYI, I don't use multi-threading here. The application mostly deals with I/O.
Any reference docs / links / books that you think will help me here?
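For reference, a minimal sketch of what such a capture loop might look like on Linux with a packet socket; the 4 MB buffer size is an arbitrary choice, and even this gives no hard guarantee of 100% capture under load:
#include <sys/socket.h>
#include <linux/if_ether.h>   // ETH_P_ALL
#include <arpa/inet.h>        // htons
#include <unistd.h>
#include <cstdio>

int main()
{
    // Requires root (CAP_NET_RAW); captures every frame, not just TCP.
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    int rcvbuf = 4 * 1024 * 1024;   // 4 MB; frames are dropped once this fills
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

    char frame[65536];
    for (;;) {
        ssize_t n = recv(s, frame, sizeof(frame), 0);
        if (n <= 0) break;
        // parse the Ethernet/IP/TCP headers in frame[0..n) here
    }
    close(s);
    return 0;
}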

How to detect internet disconnection on SSL/TCP sockets which were connected before? [duplicate]

This question already has answers here:
Howto detect that a network cable has been unplugged in a TCP connection?
(3 answers)
Closed 6 years ago.
I have a thread which blocks on select()-based SSL_read(). The main thread writes whenever needed using SSL_write(). During unit testing, I found a problem:
Client TCP socket connect()s to the server over SSL (TLS)
After some time, disable the internet connection
Write some data on the client TCP/SSL socket
Even without internet, SSL_write() returns the correct number of bytes written, instead of 0 or some error. Such internet disconnections are of arbitrary duration, neither too long nor too short.
My expectation is that, whenever there is an internet disconnection, the socket should generate some detectable event, on which I can tear down the socket + SSL connection.
In case I have to establish some hand-made client-server protocol, that's possible.
What is the best way to achieve such disconnection detection?
I am expecting a solution with fewer CPU cycles and less client-server communication. I am sure this is not a very special problem, and hence it must have been solved before.
[P.S.: The client sockets are opened on mobile platforms like Android & iOS; however, I would like to know a general solution.]
This is a general problem with sockets: you can't get certified delivery from them. If you want to know whether the counterparty is indeed reachable, you have to implement some sort of heartbeat yourself.
Generally, a successful write to the socket is no indication of the availability of the recipient on the other end.
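A minimal sketch of such a heartbeat over OpenSSL, assuming a one-byte ping/pong convention of your own design (the names and timings here are illustrative, not a standard API):
#include <openssl/ssl.h>
#include <ctime>

const char PING = 0x01;

// Returns false when the peer should be considered unreachable.
bool heartbeat(SSL* ssl, int timeoutSeconds)
{
    if (SSL_write(ssl, &PING, 1) != 1)
        return false;                        // local write failure

    std::time_t deadline = std::time(nullptr) + timeoutSeconds;
    char pong = 0;
    while (std::time(nullptr) < deadline) {
        // In real code the select()-based reader thread would wait here
        // and demultiplex the pong from application data.
        if (SSL_read(ssl, &pong, 1) == 1)
            return true;                     // peer answered in time
    }
    return false;                            // no pong: tear the connection down
}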

Boost Asio UDP Daytime Server Async receive

I've been learning boost asio recently, especially UDP. I am familiar with the basics, but had a question regarding how UDP handles incoming messages. In the tutorial (see source code here: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/tutorial/tutdaytime6/src.html), the UDP server operates something like this (simplified from the tutorial):
void startReceive() {
    socket_.async_receive_from(
        boost::asio::buffer(recvBuffer_), remoteEndpoint_,
        boost::bind(&Server::handler, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void handler(const boost::system::error_code&, std::size_t) {
    doStuffToDataReceived();
    startReceive(); // start the receiving process over again to allow more data to be received
}
My question is: if data arrives at the server while it is inside "doStuffToDataReceived()", before startReceive runs again, does that data get lost, or does it sit there waiting for startReceive to happen again and then get retrieved immediately?
Thanks!
The UDP stack has a receive buffer, so the data in the above example wouldn't get lost.
Note, however, that UDP is allowed to drop packets under various circumstances. So, as the throughput of the UDP server grows, the timing of doStuffToDataReceived might become much more critical.
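If bursts arriving while doStuffToDataReceived runs are a concern, you can also ask the OS for a larger receive buffer. A sketch, assuming socket_ is an already-open udp::socket; the 1 MB figure is arbitrary:
boost::asio::socket_base::receive_buffer_size option(1024 * 1024); // 1 MB
socket_.set_option(option);  // request a larger kernel-side receive buffer

boost::asio::socket_base::receive_buffer_size granted;
socket_.get_option(granted); // the OS may cap it; granted.value() is what you got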

Sending TCP SYN packet with Boost::Asio

Recently I began working with the Boost::Asio library (C++). I'm looking for a way to send a TCP SYN message to an end destination. However, I can't find any way of doing this; does somebody know a way to accomplish it?
The TCP stack usually deals with this, not your code. If you just call boost::asio::ip::tcp::socket::connect() on an appropriately constructed instance, you will cause a TCP SYN packet to be sent, along with the rest of the TCP handshake and session handling.
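For example, this minimal blocking connect is enough to make the kernel emit the SYN (the address is a placeholder; io_service is the name used in Boost of this era, later renamed io_context):
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::ip::tcp::socket sock(io);
    boost::asio::ip::tcp::endpoint server(
        boost::asio::ip::address::from_string("192.0.2.1"), 80); // placeholder
    sock.connect(server);  // SYN, SYN-ACK, ACK all handled by the TCP stack
    // connect() throws boost::system::system_error on failure
}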
Update:
If you want to implement TCP yourself, you will need to deal with more than just a TCP SYN; otherwise you're just writing code to attack systems with half-open connections. You need a raw socket, and you need to construct the contents of the packet yourself. If you are doing this you should be able to RTFM to find out more.

SDL Net2 Missing TCP packets

I am using SDL and the Net2 lib for a client-server application.
The problem I am facing is that I do not receive all of my TCP packets from my client unless I place a delay before sending each packet from the client.
Removing the delay, I get only one packet.
A TCP connection is a stream of bytes. Your client could send 20 packets of 5 bytes each, and the server could read them as one 100-byte sequence. You'll need to split the data up yourself, for example as in the sketch below.
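A minimal sketch of one common fix, length-prefixed framing; the helper names and the 4-byte big-endian prefix are my own convention, not part of SDL or Net2:
#include <arpa/inet.h>   // htonl/ntohl (use winsock2.h on Windows)
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append one framed message to the outgoing byte stream.
void frameMessage(std::vector<char>& out, const std::string& msg)
{
    uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
    out.insert(out.end(), reinterpret_cast<char*>(&len),
               reinterpret_cast<char*>(&len) + 4);
    out.insert(out.end(), msg.begin(), msg.end());
}

// Try to pop one complete message from the accumulated receive buffer;
// returns false until enough bytes have arrived, however TCP segmented them.
bool extractMessage(std::vector<char>& in, std::string& msg)
{
    if (in.size() < 4) return false;               // length prefix incomplete
    uint32_t len;
    std::memcpy(&len, in.data(), 4);
    len = ntohl(len);
    if (in.size() < 4 + static_cast<size_t>(len)) return false; // body incomplete
    msg.assign(in.begin() + 4, in.begin() + 4 + len);
    in.erase(in.begin(), in.begin() + 4 + len);
    return true;
}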
Well, you're not guaranteed (with regular sockets) to receive all packets at one time; you may have to call your receive function more than once to receive all the data. This of course depends on your definition of a "packet": are you receiving all of your data?
+1 erik
Although it is not guaranteed to be reliable, you most likely want to use UDP, not TCP. Net2 handles UDP very well. UDP is actually very reliable. UDP is message oriented. UDP messages tend to get sent quickly and get special treatment by routers (not always a good thing :-). UDP is often used in games.
BTW, if you had asked this question on the SDL mailing list, or sent it to me directly, you would have gotten this advice many months ago.
I wrote Net2 and I hang out on the SDL list. I do not hang out here because this place is an infinite time sink.
Bob Pendleton