Recently I began working with the Boost::Asio library (C++). I'm looking for a way to send a TCP SYN message to an end destination. However, I can't find any way of doing this. Does somebody know a way to accomplish it?
The TCP stack usually deals with this, not your code. If you just call boost::asio::ip::tcp::socket::connect() on an appropriately constructed instance, you will cause a TCP SYN packet to be sent, along with the rest of the TCP handshake and session handling.
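For example, a minimal sketch against a recent Boost version (host and port are placeholders):

    // Connecting a TCP socket; the SYN/SYN-ACK/ACK handshake happens
    // inside connect().
    #include <boost/asio.hpp>

    int main() {
        boost::asio::io_context io;
        boost::asio::ip::tcp::resolver resolver(io);
        auto endpoints = resolver.resolve("example.com", "80");

        boost::asio::ip::tcp::socket socket(io);
        boost::asio::connect(socket, endpoints);  // the TCP SYN goes out here
    }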
Update:
If you want to implement TCP yourself, you will need to deal with more than just a TCP SYN; otherwise you're just writing code to attack systems with half-open connections. You need a raw socket, and you need to construct the contents of the packet yourself. If you are doing this you should be able to RTFM to find out more.
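For illustration only, here is a bare-bones Linux sketch of that (requires root; the addresses and ports are placeholders, and note that sending a lone SYN like this is exactly the half-open situation mentioned above):

    // Hand-crafting a TCP SYN on a raw socket. The kernel builds the IP
    // header for us (no IP_HDRINCL), but the TCP checksum, which covers a
    // pseudo-header plus the TCP header, is ours to compute.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <unistd.h>

    static uint16_t checksum(const uint16_t* data, size_t words) {
        uint32_t sum = 0;
        for (size_t i = 0; i < words; ++i) sum += data[i];
        while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
        return static_cast<uint16_t>(~sum);
    }

    int main() {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
        if (fd < 0) { perror("socket"); return 1; }

        struct Pseudo {          // pseudo-header, used only for the checksum
            uint32_t src, dst;
            uint8_t zero, proto;
            uint16_t len;
            tcphdr tcp;
        } __attribute__((packed)) p{};

        p.src   = inet_addr("192.0.2.1");  // placeholder source address
        p.dst   = inet_addr("192.0.2.2");  // placeholder destination address
        p.proto = IPPROTO_TCP;
        p.len   = htons(sizeof(tcphdr));

        p.tcp.source = htons(40000);
        p.tcp.dest   = htons(80);
        p.tcp.seq    = htonl(0);           // a real stack randomizes the ISN
        p.tcp.doff   = sizeof(tcphdr) / 4;
        p.tcp.syn    = 1;
        p.tcp.window = htons(65535);
        p.tcp.check  = checksum(reinterpret_cast<uint16_t*>(&p), sizeof(p) / 2);

        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_addr.s_addr = p.dst;
        sendto(fd, &p.tcp, sizeof(tcphdr), 0,
               reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
        close(fd);
    }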
I'm trying to get the TCP header of a TCP connection in C++11. Reading through existing StackOverflow questions (here, here, here and here), it seems that I have to open a raw socket (SOCK_RAW) or write a Linux Kernel Module (LKM) to get access to it.
From what I've understood, opening a raw socket means handling the whole TCP protocol (handshake, window size, etc...). Is there a way to obtain the TCP header and let the kernel manage the TCP protocol (either "by hand" or with some framework)?
I know I could use libpcap to capture the packets, but that would mean my application somehow has to match each incoming packet on the TCP socket with the corresponding packet captured by libpcap. While this is a possible solution, it would be a cumbersome one (and I would not like to do that).
Any help is appreciated, thank you!
A "quick and dirty" approach might be using two connections, an external connection to the remote host and a pure internal one. Sure, this won't be the most efficient approach, but is easy (and fast) to implement (the core feature of QAD "solutions"...):
socket ext_raw ------- socket remote, TCP (likely, at least)
socket int_raw ---
                 |  (loopback connection)
socket int_tcp ---
Any incoming messages at ext_raw and int_raw are just forwarded from one to the other (while incoming messages on ext_raw can be inspected for TCP headers), whereas all the normal TCP handling is done by the internal TCP socket. So in a way, you'll be tunneling the TCP connection through your two raw sockets...
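The forwarding itself can be a simple pump loop, one per direction (a rough sketch with plain BSD calls; the names are placeholders):

    // Copy whatever arrives on `from` to `to`. When `from` is the external
    // raw socket, the buffer starts with the IP header, so the TCP header
    // can be inspected right here.
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    void pump(int from, int to) {
        char buf[65536];
        for (;;) {
            ssize_t n = recv(from, buf, sizeof buf, 0);
            if (n <= 0) break;
            // ... inspect IP/TCP headers in buf here if `from` is ext_raw ...
            if (send(to, buf, static_cast<size_t>(n), 0) != n) break;
        }
    }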
As a homework assignment, I wrote a UDP server-client application that tries to correct errors in the UDP communication using checksums and by confirming correctly received packets.
The problem is that on localhost, all packets are received without a problem. I tried some packet-tampering programs, but they all require communication through a network interface.
How to simulate UDP packet loss on localhost loopback address?
UDP is easy to deal with--just write a bit of code in the sender or receiver which drops a certain percentage of the messages, and perhaps occasionally reorders some too.
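For example, a sender-side wrapper along these lines (a sketch assuming an already set-up BSD UDP socket; the function name is made up):

    // Drop roughly `loss_percent` percent of outgoing datagrams before
    // they ever reach sendto().
    #include <sys/socket.h>
    #include <random>

    bool lossy_sendto(int fd, const void* buf, size_t len,
                      const sockaddr* dst, socklen_t dstlen,
                      double loss_percent) {
        static std::mt19937 rng{std::random_device{}()};
        static std::uniform_real_distribution<double> dist(0.0, 100.0);
        if (dist(rng) < loss_percent)
            return true;  // pretend the datagram was "lost" in transit
        return sendto(fd, buf, len, 0, dst, dstlen) ==
               static_cast<ssize_t>(len);
    }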
If you can't modify the actual sender or receiver, it is easy enough to write a third program which simply sits in the middle, forwarding packets with some drops and reordering.
If you're using Linux, you can probably set up iptables to drop packets for you: http://code.nomad-labs.com/2010/03/11/simulating-dropped-packets-aka-crappy-internets-with-iptables/ - this seems like it might work even on the loopback interface.
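The linked approach relies on iptables' statistic match; a rule along these lines (the port and probability are placeholders) drops about 25% of matching datagrams:

    iptables -A INPUT -i lo -p udp --dport 9999 -m statistic --mode random --probability 0.25 -j DROP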
I have a server application written in C++. When a client connects, the server creates a new thread for it; in that thread there is a blocking read from a socket. Because a client may accidentally disconnect and leave behind a thread still hanging in the read function, there is a thread that checks whether the sockets are still alive by sending "heartbeat messages". Such a message consists of one character and is "ignored" by the client (it is not processed like other messages). The write looks like this:
write(fd, ";", 1);
It works fine, but is it really necessary to send a random character through the socket? I tried to send an empty message ("" with length 0), but it didn't work. Is there a better way to solve this socket checking?
Edit:
I'm using BSD sockets (TCP).
I'm assuming that when you say "socket," you mean a TCP network socket.
If that's true, then the TCP protocol gives you a keepalive option that you would need to ask the OS to use.
I think this StackOverflow answer gets at what you would need to do, assuming a BSDish socket library.
In my experience, using heartbeat messages on TCP (and checking for responses, e.g. NOP/NOP-ACK) is the easiest way to get reliable and timely indication of connectivity at the application layer. The network layer can do some interesting things but getting notification in your application can be tricky.
If you can switch to UDP, you'll have more control and flexibility at the application layer, and probably reduced traffic overall since you can customize the communications, but you'll need to handle reliability, packet ordering, etc. yourself.
You can set the KEEPALIVE option on the connection. You may be interested in this link: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
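For example, on Linux (a sketch; the interval values are arbitrary examples):

    // Enable TCP keepalive on an existing socket `fd` and tighten the
    // Linux-specific probe timings.
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int fd) {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);

        int idle  = 30;  // seconds of idleness before the first probe
        int intvl = 5;   // seconds between probes
        int cnt   = 3;   // unanswered probes before the connection is dropped
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);
    }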
It is OK to create a thread for each incoming request if this is only a toy. Most of the time, I use poll(), that is, non-blocking I/O, for better performance.
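A bare-bones poll() loop looks roughly like this (a sketch; most error handling is omitted, and `listen_fd` is assumed to be a listening TCP socket set up elsewhere):

    // One thread serves every connection: poll() tells us which sockets
    // are readable, so recv() never blocks for long.
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    void serve(int listen_fd) {
        std::vector<pollfd> fds{{listen_fd, POLLIN, 0}};
        for (;;) {
            if (poll(fds.data(), fds.size(), -1) < 0) break;
            if (fds[0].revents & POLLIN) {           // new connection
                int c = accept(listen_fd, nullptr, nullptr);
                if (c >= 0) fds.push_back({c, POLLIN, 0});
            }
            for (size_t i = 1; i < fds.size(); ++i) {
                if (!(fds[i].revents & POLLIN)) continue;
                char buf[4096];
                ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                if (n <= 0) {                        // peer closed or error
                    close(fds[i].fd);
                    fds.erase(fds.begin() + i--);
                    continue;
                }
                // ... handle n bytes of data in buf ...
            }
        }
    }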
In my Windows C++ application I'm using the winsock API.
I want to detect network errors in my C++ functions.
Using Wireshark, I can see that after a network error occurs there are TCP retransmission packets.
Do you know how I can detect TCP retransmission timeouts with C++ functions?
Basically, there is no way: the sockets API just does not give you such low-level information. You can only detect total connection failure.
If you want EXACTLY what you are asking for, you have to capture network packets and do flow analysis, like Wireshark does. Otherwise, please clarify why you want to detect this; maybe TCP keepalive or UDP will suffice.
If the connection is broken, all calls to recv (or WSARecv) will return an error. TCP itself has retransmission of packets built into the protocol, so you don't really have to do anything in most cases.
If the cable between the two peers is broken in some way, though, you won't get an error when receiving. Then you have to implement your own timeout. If your higher-level protocol is request-response based (i.e. you send a request and the other peer returns a response), it is easy: if no response is received within X seconds, close the connection and reconnect.
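For example (a sketch assuming a connected BSD-style socket; the deadline value is an arbitrary choice):

    // Returns true if the peer sent something within `seconds`,
    // false on timeout; the caller can then close and reconnect.
    #include <sys/select.h>
    #include <sys/socket.h>

    bool wait_for_response(int fd, int seconds) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        timeval tv{seconds, 0};
        return select(fd + 1, &readfds, nullptr, nullptr, &tv) > 0;
    }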
Edit: In response to the comments:
TCP has this retransmission built-in, there is no way to turn it off, or get an error after the first timeout. One way to solve this is to use UDP (SOCK_DGRAM) sockets instead. The problem with this is that you have to take care of everything yourself, including handling timeouts if there are no responses.
I am using SDL and Net2 lib for a client-server application.
The problem I am facing is that I am not receiving all of my TCP packets from my client unless I place a delay before sending each packet from the client.
Removing the delay, I get only one packet.
A TCP connection is a stream of bytes. Your client could send 20 packets of 5 bytes each, and the server could read them as one 100-byte sequence. You'll need to split the data up yourself.
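One common fix is to prefix every message with its length, sketched here with plain BSD sockets rather than Net2 (the sender is assumed to write the length with htonl()):

    // read_exact() loops until it has the requested number of bytes, so
    // read_message() reassembles one whole message from the byte stream.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <vector>

    static bool read_exact(int fd, void* buf, size_t len) {
        auto p = static_cast<char*>(buf);
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return false;  // connection closed or error
            p += n;
            len -= static_cast<size_t>(n);
        }
        return true;
    }

    bool read_message(int fd, std::vector<char>& msg) {
        uint32_t netlen;
        if (!read_exact(fd, &netlen, sizeof netlen)) return false;
        msg.resize(ntohl(netlen));  // length was sent in network byte order
        return read_exact(fd, msg.data(), msg.size());
    }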
Well, with regular sockets you're not guaranteed to receive all the data at once; you may have to call your receive function more than once to receive all of it. This of course depends on your definition of a "packet": are you receiving all of your data?
+1 erik
Although it does not guarantee delivery, you most likely want to use UDP, not TCP. Net2 handles UDP very well. In practice, UDP is actually very reliable. UDP is message oriented. UDP messages tend to get sent quickly and get special treatment by routers (not always a good thing :-). UDP is often used in games.
BTW, if you had asked this question on the SDL mailing list, or sent it to me directly, you would have gotten this advice many months ago.
I wrote Net2 and I hang out on the SDL list. I do not hang out here because this place is an infinite time sink.
Bob Pendleton