Is there a way to programmatically close a TCP connection in C/C++ ungracefully on Linux?
For testing, I would like to simulate the case where an endpoint is simply powered down, without transmitting any FIN and/or RST and without flushing any buffers.
If you're opening a socket via socket(AF_INET, SOCK_STREAM, 0); then it's difficult to implement an ungraceful shutdown, since even killing the process still leaves the kernel in charge of closing the socket.
You can either externally block network access to that socket (e.g. ifdown, iptables), which you can invoke from your program, or you'll have to implement custom TCP code using socket(AF_INET, SOCK_RAW, 0); so that the kernel doesn't try to close it gracefully (it wouldn't even know it's a TCP socket).
The easiest way to simulate the machine with the connection "going away" without notice is probably to block (drop) the incoming packets on the connection. A netfilter command like
iptables -A INPUT -p tcp --dport $PORT -j DROP
will do that -- replace $PORT with the connection's local port number. That won't block outgoing packets (which may be an issue if you have keepalives or periodically send data). You can do that with
iptables -A OUTPUT -p tcp --sport $PORT -j DROP
Be sure to remove these filtering rules when you're done, as otherwise they'll stick around and cause problems.
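If the test needs to toggle this from code, here is a minimal sketch (my own illustration, assuming root and a hypothetical port 5000; the class name is mine) that shells out to the same iptables commands and makes sure they are removed again:

#include <cstdlib>
#include <string>

// RAII helper: adds the DROP rules on construction and removes them on
// destruction, so the "powered down" simulation always ends with the test.
struct BlackholeRule {
    std::string port;
    explicit BlackholeRule(const std::string& p) : port(p) {
        std::system(("iptables -A INPUT -p tcp --dport " + port + " -j DROP").c_str());
        std::system(("iptables -A OUTPUT -p tcp --sport " + port + " -j DROP").c_str());
    }
    ~BlackholeRule() {
        std::system(("iptables -D INPUT -p tcp --dport " + port + " -j DROP").c_str());
        std::system(("iptables -D OUTPUT -p tcp --sport " + port + " -j DROP").c_str());
    }
};

int main() {
    BlackholeRule block("5000");   // from here on the peer appears to have vanished
    // ... run the part of the test that expects the endpoint to be gone ...
}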
I would prefer to act on the NIC driver's IRQ processing.
The Linux kernel provides a way to set the affinity of an IRQ to specific CPUs, so that the IRQ can only be processed by those CPUs. By setting the affinity of an IRQ to an empty set of CPUs, you can effectively disable the IRQ.
Disable the irqbalance daemon, then check which interrupts your NIC driver uses:
cat /proc/interrupts
Then forbid any CPU from processing the NIC's IRQ:
echo 0 > /proc/irq/N/smp_affinity
You can use the close() function declared in unistd.h to close the file descriptor associated with the socket. However, it is important to note that this will not necessarily terminate the connection immediately: the underlying TCP protocol has its own mechanisms for closing connections, and those may take some time to complete.
I tried the official TCP echo server example (server and client). With netstat -ano | findstr TIME_WAIT I can see that the client leaves a connection in TIME_WAIT every time, while the server disconnects cleanly.
Is there any way to prevent the TIME_WAIT or CLOSE_WAIT, so that both sides disconnect cleanly?
Here are the captured packets; it seems the last ACK is sent correctly, but there is still a TIME_WAIT on the client side.
CLOSE_WAIT is a programming error. The local application has received an incoming close but hasn't closed this end.
TIME_WAIT comes after a clean disconnect by both parties, and it only lasts a few minutes. The way to avoid it is to be the end that receives the first close. Typically you want to avoid it at the server, so you have the client close first.
A long lingering CLOSE_WAIT is really a programming error (the OS performs the connection shutdown, but your application doesn't remember to free the socket in a timely manner -- or at all).
TIME_WAIT, however, is not really an exceptional condition. It is necessary to provide a clean close on a connection that might have lost the very last ACK segment during the normal connection shutdown. Without it, the retransmission of the FIN+ACK segment would be answered with a connection reset, and some sensitive applications might not like that.
The most common way to have fewer sockets in TIME_WAIT state is to shorten its duration globally, by tuning a global OS-level parameter. IIRC, there is also a way to disable it completely on a single socket through setsockopt() (I don't remember which option, however), but then you might occasionally send unwanted RST segments to peers that lose packets during connection shutdown.
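The per-socket option alluded to above is most likely SO_LINGER with a zero linger time (my assumption, not stated in the answer); as warned, it makes close() abort the connection with an RST instead of a normal FIN, which is what skips TIME_WAIT. A minimal sketch:

#include <sys/socket.h>
#include <unistd.h>

// Abortive close: with l_onoff=1 and l_linger=0, close() sends an RST and the
// socket never enters TIME_WAIT. Use with care -- the peer sees a reset.
void abortive_close(int fd) {
    struct linger lg;
    lg.l_onoff = 1;
    lg.l_linger = 0;
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(fd);
}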
As to why you see them only on one side of the connection: it is probably the side that requested to close the connection first. That side sends the first FIN, receives the FIN+ACK, and sends the last ACK. If that last ACK is lost, it will receive the FIN+ACK again and should resend the ACK, not an RST. The other side, however, knows for sure that the connection is completely finished when the last ACK arrives, and then there is no need to wait for anything else on that socket -- if anything arrives at that host with the same pair of address+TCP port endpoints as the just-closed socket, it should either be a new connection request (in which case a new connection might be opened), or a TCP state machine violation (which must be answered with an RST, or maybe some ICMP prohibited message).
I have a systemd .socket unit paired with a templated @.service. The socket contains Accept=yes in order to accept a TCP connection from a client over the specified port, after which an @.service instance is created which executes my server program to handle the TCP connection. Currently, I am testing with one client (Windows software) connecting to the server (Linux C++).
My problem is that for the first client connection attempt, the TCP connection succeeds but there is a long delay (5-10 seconds) before systemd launches the associated @.service. Any subsequent connection launches the @.service almost immediately, UNLESS a TCP RST packet is received. If a TCP RST packet is received, the next connection again has a 5-10 second delay before the @.service is launched, and the cycle repeats.
My .socket file is very simple. For the [Socket] portion, it really just specifies a ListenStream port and Accept=true.
Any ideas what may be causing this delay?
The first thing that comes to mind is that systemd itself is not getting enough CPU to accept the connection, but that is probably not it, since you think it has something to do with the TCP RST packet.
You can change the log level to debug in /etc/systemd/system.conf to get more information about when systemd actually accepts the connection.
The way it works is: systemd listens on the socket described in the .socket file and adds the file descriptor to its epoll loop. As soon as there is activity on the socket, systemd gets a notification in its event loop, accepts the connection, and starts the program specified in the .service file.
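Roughly the same pattern in plain socket code, as a sketch of the mechanism described above (NOT systemd's actual implementation; port 9000 and /usr/bin/handler are placeholders):

#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);               // stands in for ListenStream=
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (sockaddr*)&addr, sizeof(addr));
    listen(lfd, SOMAXCONN);

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = lfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);    // "puts an epoll on the file descriptor"

    for (;;) {
        epoll_event ready;
        epoll_wait(ep, &ready, 1, -1);         // wakes up as soon as a client connects
        int cfd = accept(lfd, nullptr, nullptr);
        if (fork() == 0) {                     // like Accept=yes: one instance per connection
            dup2(cfd, 0);                      // hand the connection to the child
            dup2(cfd, 1);
            execl("/usr/bin/handler", "handler", (char*)nullptr);
            _exit(1);
        }
        close(cfd);
    }
}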
I want to implement a network delay model for TCP/UDP traffic as described in Linux libnetfilter_queue delayed packet problem. I have followed Andy's suggestion there, copying the entire packet to my program and placing it in a priority queue. As time passes, packets in the priority queue are removed and dispatched using RAW sockets.
The problem I am facing is this: the initial capture of packets by libnetfilter_queue is done by matching the port (sudo iptables -A OUTPUT -p udp --dport 8000 -j NFQUEUE --queue-num 0). When these packets are re-injected via RAW sockets, they are picked up once again by libnetfilter_queue (since the port remains the same) and hence loop forever.
I am really confused and cannot think of a way out. Please help me.
Use skb->mark. It's a marking which only exists within the IP stack of your host. It does not affect anything in the network packet itself.
You can match on it in iptables with the mark match ('-m mark --mark'). Use it to return from your delay chain so that your re-inserted packets are not delayed again. For example, send the original traffic through a DELAY chain instead of jumping straight to NFQUEUE (replacing the question's direct NFQUEUE rule), and let marked re-injected packets skip the queue:
iptables -N DELAY
iptables -A OUTPUT -p udp --dport 8000 -j DELAY
iptables -A DELAY -m mark --mark 0xE -j RETURN
iptables -A DELAY -j NFQUEUE --queue-num 0
You can configure the raw socket to apply a mark, using setsockopt(fd, SOL_SOCKET, SO_MARK, ...). You only need to do this once after opening the socket. The mark value will be automatically applied to each packet you send through the socket.
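A sketch of that call (0xE matches the --mark value in the rule above; SO_MARK requires CAP_NET_ADMIN/root):

#include <sys/socket.h>

// Mark every packet subsequently sent through the re-injection raw socket so
// the DELAY chain's RETURN rule lets it pass without being queued again.
bool mark_raw_socket(int raw_fd, unsigned int mark = 0xE) {
    return setsockopt(raw_fd, SOL_SOCKET, SO_MARK, &mark, sizeof(mark)) == 0;
}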
This probably isn't the best way to do it, but here is one possible solution. You could use the DSCP field in the IP header to differentiate new packets and packets you are re-injecting. Change your iptables rule to only enqueue packets with a DSCP of 0 (see http://www.frozentux.net/iptables-tutorial/iptables-tutorial.html#DSCPMATCH). This assumes when your OS sends a packet, it will set the DSCP to 0. Now all new packets generated by the OS will be sent to your program because they still match the iptables rule. When you are creating a new packet in your program using a RAW socket, set the DSCP value to a non-zero value. When your new packet is re-injected, it will no longer match the iptables rule and will go out over the network.
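A sketch of tagging a re-injected packet this way, assuming the raw socket uses IP_HDRINCL so the program writes the IPv4 header itself (the DSCP value and the helper name are my own illustration):

#include <netinet/ip.h>
#include <stdint.h>

// Rewrite the DSCP bits (upper 6 bits of the TOS byte) of a copied IPv4 packet
// before re-injecting it, so it no longer matches the "DSCP 0" iptables rule.
void set_dscp(uint8_t* packet, uint8_t dscp) {
    struct iphdr* ip = (struct iphdr*)packet;
    ip->tos = (uint8_t)((dscp << 2) | (ip->tos & 0x03));  // keep the 2 ECN bits
    ip->check = 0;  // header changed: zero the checksum; the kernel fills it in
                    // for IP_HDRINCL raw sockets
}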
If you don't want packets going out over the network with DSCP values set, you could add another iptables rule to rewrite the DSCP values back to 0.
I have an application which talks to a server over HTTP. I have written code to control the connect timeout (the amount of time it will wait for the server to reply). But I am finding it hard to create a test case for my connect timeout code. Could you please help me?
Basically, the TCP handshake consists of:
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP socket connection is ESTABLISHED.
In my application, if the server does not complete the TCP handshake within x seconds, the application moves on to the next server. But to test this code, I need a server stub which will receive the SYN packet from the client but will not send a SYN+ACK back, thus making the client wait for the server's reply.
Could you please help me create a small server stub which will listen on a particular port but will not complete the handshake?
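For reference, the kind of client-side timeout being tested usually looks like a non-blocking connect() bounded by select(); this is a sketch under that assumption (the question's actual code is not shown):

#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

// Returns 0 if the handshake completed within timeout_sec, -1 otherwise
// (the caller would then move on to the next server).
int connect_with_timeout(int fd, const sockaddr* addr, socklen_t len, int timeout_sec) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    if (connect(fd, addr, len) == 0)
        return 0;                       // connected immediately
    if (errno != EINPROGRESS)
        return -1;

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    timeval tv = { timeout_sec, 0 };
    if (select(fd + 1, nullptr, &wfds, nullptr, &tv) <= 0)
        return -1;                      // no SYN+ACK arrived in time

    int err = 0;
    socklen_t elen = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);  // check handshake result
    return err == 0 ? 0 : -1;
}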
Given you mentioned RHEL I think you're best off using iptables to help test this. For example you could call:
iptables -I INPUT -s hostb -d hosta -p tcp --dport $port --tcp-flags SYN,ACK SYN,ACK -j DROP
calling that before running the test (or even during it, perhaps?) and an equivalent iptables -D with the same match to delete it afterwards seems to be by far the simplest way of breaking a handshake halfway through.
Drop all SYN+ACK (warning, WILL break new SSH connections):
iptables -I INPUT -p tcp --tcp-flags SYN,ACK SYN,ACK -j DROP
Drop all from or to 10.10.22.34:
iptables -I INPUT -s 10.10.22.34 -j DROP
iptables -I OUTPUT -d 10.10.22.34 -j DROP
Personally I would use the most specific match you can possibly write to avoid accidentally breaking remote access or anything else at the same time.
You could get fancier and use the -m owner match to only apply this rule for packets to/from the UID you run this test as.
I wouldn't rely on iptables, or any other external tool, for unit testing, as those tests would be too brittle. What if the IP address changes, or the unit tests are run on another machine? What if the code has to be ported to an OS where iptables is not available?
In order to keep the unit tests isolated from the network, I would encapsulate the socket API in a Socket class. Then I would have a Connection class that uses the Socket class. I would unit test the Connection class with a TimeoutSocket class (derived from Socket) that simulates the server not accepting the first connection request.
Your code should not depend on what's going on on the wire.
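A sketch of that test seam (class and method names are illustrative, not from the question):

#include <chrono>
#include <string>
#include <thread>

class Socket {                        // thin wrapper around the socket API
public:
    virtual ~Socket() = default;
    virtual bool connectTo(const std::string& host, int port, int timeoutSec) = 0;
};

class TimeoutSocket : public Socket { // test double: the "server" never answers
public:
    bool connectTo(const std::string&, int, int timeoutSec) override {
        std::this_thread::sleep_for(std::chrono::seconds(timeoutSec)); // pretend to wait
        return false;                 // handshake never completed
    }
};

class Connection {                    // code under test
public:
    explicit Connection(Socket& s) : sock_(s) {}
    bool open(const std::string& host, int port, int timeoutSec) {
        return sock_.connectTo(host, port, timeoutSec);
    }
private:
    Socket& sock_;
};

// In the unit test: a Connection given a TimeoutSocket should report failure
// and move on to the next server, without ever touching the real network.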
How can I create a client UDP socket in C++ so that it can listen on a port which is already being listened on by another application? In other words, how can I apply port multiplexing in C++?
I want to listen on only one port
You can do that with a sniffer. Just ignore the packets from different ports.
I might need to stop it from sending out some particular packets, because my program will send them instead of the original application
Okay, in that case I suggest you drop the sniffer approach and use a MITM technique instead.
You'll need to rely on a PREROUTING firewall rule to divert the packets to a "proxy" application. Assuming UDP, Linux, iptables, and the "proxy" running on the same host, here's what the "proxy" actually needs to do:
1. Add the firewall rule to divert the packets (do it manually, if you prefer):
iptables -t nat -A PREROUTING -i <iface> -p <proto> --dport <dport> -j REDIRECT --to-port <newport>
2. Bind and listen on <newport>.
3. Relay all the traffic between the 2 endpoints (client, and original destination). If you're running the "proxy" on a different host, use getsockopt with SO_ORIGINAL_DST to retrieve the original destination address.
It might sound tricky, but... yeah, that's because it's a bit tricky :-)
Consult your firewall documentation if my assumption diverges.
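A sketch of steps 2 and 3 for the same-host case (port 7000 stands in for <newport>; the relay logic itself is elided):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(7000);                 // the <newport> the REDIRECT rule targets
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (sockaddr*)&local, sizeof(local));

    char buf[2048];
    sockaddr_in client{};
    socklen_t clen = sizeof(client);
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, (sockaddr*)&client, &clen);
        if (n < 0)
            break;
        // ...inspect or rewrite buf, forward it to the original destination,
        //    and relay any replies back to `client`...
    }
    close(fd);
}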
This is just packet sniffing like tcpdump or snoop, open up a raw socket and pull everything from the wire and filter as you require. You will probably want to use libpcap to make things a little easier.
Without administrator or super-user privileges you will need the target application to open ports with SO_REUSEADDR and SO_REUSEPORT as appropriate for the platform. The caveat being you can only receive broadcast and multicast packets, unicast packets are delivered to the first open socket.
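A sketch of those socket options (both applications would have to set them before bind() for the shared bind to succeed; SO_REUSEPORT needs Linux 3.9 or later):

#include <netinet/in.h>
#include <sys/socket.h>

int open_shared_udp(int port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    return fd;
}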
This is not multiplexing - that term is reserved for handling I/O on multiple channels in the same process and where things like select(2) and poll(2) are most useful.
What you are asking for is multicast. A basic example is sketched below.
Note that IP reserves a special range of addresses (a.k.a. groups) for multicasting. These get mapped to special ethernet addresses. The listener(s) would have to join the multicast group, while sender does not have to, it just sends as usual.
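A minimal sketch of a listener joining a group (the group 239.0.0.1 and port used here are placeholders, not from the question):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int join_multicast(const char* group, int port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)); // let several listeners bind
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (sockaddr*)&addr, sizeof(addr));

    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr(group);   // e.g. "239.0.0.1"
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    return fd;   // recvfrom() on fd now receives the group's datagrams
}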
Hope this helps.