FTP Client: Connection Reset, Can't Close Connection Until Application Quits - C++

Overview
In C++ (gcc-c++ 6.3.1) on Fedora 24 x86_64, using the standard sockets API, I am making a basic FTP client based on a select few components of RFC 959. I am running into an occasional problem: when receiving data from or sending data to a server, the data connection gets reset. I have no trouble closing the socket and continuing execution in the case of a data read, but my program trips up quite seriously when performing a write (such as with put/STOR).
There is a lot of code in this program so far, so I will include the most relevant parts in my question and attempt to abstract the rest. If additional segments of code are needed, I will add them by request.
Detailed Process
Assume that the control connection is already functional and a user is authenticated.
1. I send a passive-mode request to the server (PASV).
2. I receive the passive-mode response (227).
3. I open the data connection to the IP address and port number given in the response.
4. I send a store request to the server (STOR).
5. I receive store approval (150).
6. I call send() for the first time, sending some of the data, but -1 is returned.
7. I verify that errno equals ECONNRESET.
8. I attempt to close the socket.
9. I read the control connection for the server's confirmation (226).
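To make steps 6-8 concrete, here is a minimal, abstracted sketch of the failing write path (the names sendChunk and dataFd are placeholders, not the real identifiers from my program):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>

    bool sendChunk(int dataFd, const char* buf, size_t len)
    {
        // MSG_NOSIGNAL (Linux): report EPIPE/ECONNRESET as errors
        // instead of raising SIGPIPE on a dead connection
        ssize_t n = send(dataFd, buf, len, MSG_NOSIGNAL);  // step 6
        if (n >= 0)
            return true;
        if (errno == ECONNRESET) {                         // step 7
            if (shutdown(dataFd, SHUT_RDWR) < 0)           // step 8: this is
                std::perror("shutdown");                   // where I see ENOTCONN
            if (close(dataFd) < 0)
                std::perror("close");
        }
        return false;
    }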
Observations
When attempting to close the connection in step 8 above, I have tried a few different things.
- shutdown(fileDescriptor, [any flag]) returns -1 with errno ENOTCONN (107), "Transport endpoint is not connected".
- close(fileDescriptor) returns 0 for success (occasionally I have seen ECONNRESET here too).
- This behavior does not change when setting SO_LINGER to close immediately. In fact, the most interesting part is that after the server sends an Rst packet to me, my client never sends its own Rst or Fin until I terminate the program.
- Enabling TCP_NODELAY to disable Nagle's algorithm doesn't change anything either.
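For reference, this is roughly how I set those two options (again an abstracted sketch, not my actual code):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    void applyDataSocketOptions(int fd)
    {
        // SO_LINGER with a zero timeout: close() should abort the
        // connection (Rst) instead of waiting for a graceful Fin exchange
        linger lin{};
        lin.l_onoff  = 1;
        lin.l_linger = 0;
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));

        // TCP_NODELAY: disable Nagle's algorithm
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }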
Packet Watch: Standard FTP Client
This observes the packets transferred by the standard FTP client that comes with most Unix operating systems.

Packet        Direction  Purpose
ftp: PASV     -->        Client requests passive mode
ftp: 227      <--        Server approves passive mode
tcp: Syn      -->        Client initiates handshake for the data connection
tcp: Syn+Ack  <--        Server acknowledges the handshake
tcp: Ack      -->        Client acknowledges the acknowledgement
ftp: STOR     -->        Client requests to store a file
ftp: 150      <--        Server approves the store request
tcp: Rst+Ack  <--        Server signals that the connection is closed/reset
tcp: Fin+Ack  -->        Client closes the connection
tcp: Ack      -->        Client acknowledges the server
ftp: 226      <--        Server reports that the file upload is done
Differences On My Client
If you observe the same transaction from my client, the Fin+Ack and Rst packets are never sent when close() or shutdown() is called.
After this data connection fails to close, the control connection refuses to deliver any more responses, which causes all subsequent commands to hang. But once I quit the program, resets are sent on all open connections and they finally close.
Does anyone know of any obvious problems I might be running into when trying to terminate the data connection? Why doesn't shutdown() or close() actually close the socket, and how can I force it to block until it succeeds in doing so?

Related

How to detect a connection failure in Indy TCP Client

I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.
I can start the server and connect the client to it correctly, but if I set the server's MaxConnections to a value N and then try to connect an (N+1)th client, the connection does not fail, apparently.
For example: I set MaxConnections=1 in the server, the first client connects to it and the server OnConnect event is raised, while in the client OnStatus event I get two messages:
message 1: Connecting to 10.0.0.16.
message 2: Connected.
I try to connect the second client: the server OnConnect event is NOT raised (and this is what I expect) but in the client OnStatus event I get the same two messages (and this is not what I expect):
message 1: Connecting to 10.0.0.16.
message 2: Connected.
Then, the first client can exchange data with the server, and the second client can't (this seems right).
I don't understand why the second client connection does not fail explicitly, am I doing something wrong?
You are not doing anything wrong. This is normal behavior for TIdTCPServer.
There is no cross-platform socket API at the OS level [1] to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (though this is more of a suggestion than a hard limit; the underlying socket stack may override it).
As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if the MaxConnections limit has been exceeded.
So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure in connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they really did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake completed). However, they simply will not process the disconnect until they try to actually communicate with the server; then they will detect the disconnect, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
[1]: On Windows only, there is a WSAAccept() function with a callback that can reject pending connections before they leave the backlog queue, but Indy does not make use of this callback at this time.
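To illustrate the client-side view described above, a sketch along these lines (standard Indy usage; the host and port are placeholders) would connect "successfully" and only hit the disconnect on its first read:

    #include <IdTCPClient.hpp>
    #include <IdExceptions.hpp>

    TIdTCPClient* client = new TIdTCPClient(nullptr);
    client->Host = "10.0.0.16";  // placeholder address
    client->Port = 6000;         // placeholder port
    try {
        client->Connect();       // succeeds even past MaxConnections
        // the disconnect only surfaces on the first actual I/O:
        String line = client->IOHandler->ReadLn();
    } catch (const EIdConnClosedGracefully&) {
        // the server accepted us and immediately disconnected
    }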
Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.
The nature of TCP is that it's supposed to handle network drops. The sender does not immediately bail out, but will keep trying to connect for some period of time. This part is consistent with all TCP implementations.
If you want your client to quickly fail a connection that does not get established within some set period of time you'll need to implement a manual timeout yourself.
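For example, with the plain sockets API a manual connect timeout is usually built from a non-blocking connect() plus select(); this POSIX-flavoured sketch shows the idea (Winsock differs in details, and Indy's TIdTCPClient also exposes a ConnectTimeout property):

    #include <sys/socket.h>
    #include <sys/select.h>
    #include <fcntl.h>
    #include <cerrno>

    bool connectWithTimeout(int fd, const sockaddr* addr, socklen_t len,
                            int seconds)
    {
        // switch to non-blocking so connect() returns immediately
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
        if (connect(fd, addr, len) == 0)
            return true;                  // connected at once (rare)
        if (errno != EINPROGRESS)
            return false;                 // immediate hard failure

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        timeval tv{seconds, 0};
        if (select(fd + 1, nullptr, &wfds, nullptr, &tv) <= 0)
            return false;                 // timed out (or select error)

        int err = 0;
        socklen_t elen = sizeof(err);     // check the handshake's result
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
        return err == 0;
    }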

Strange behaviour of TCP-connection

We have the following configuration: a server, clients, and a DHCP server. The server runs on a static IP in "client mode"; that is, the server has a list of all clients (hostnames) and establishes the TCP connections to them. The clients have dynamic IPs.
How it works: 1) the server creates a connection to a client, and 2) the server waits for any data from the client (we use the ACE framework and the reactor pattern). To keep the client list up to date, we have added a timer that sends a heartbeat to all clients.
And there is one strange behaviour. Let's say the hostname "somehost.internal" has the IP "10.10.100.50":
1. Time = t: connect to the hostname "somehost.internal".
2. Time = t+1: the client's IP changes to "10.10.100.60".
3. Time = t+2: the heartbeat timer sends data to the existing endpoint (10.10.100.50) and returns successfully (why? this IP is no longer reachable).
4. In Wireshark I can see retransmission packets.
5. Time = t+5: a few seconds later, the event handler returns with an error and the connection to the endpoint (10.10.100.50) is closed.
Do you have any idea why a blocking send function returns successfully when the remote endpoint no longer exists?
I assume in step 3 you only send a heartbeat message to the client but do not actually wait for a response from the client on that heartbeat message.
The socket send() function only forwards the data to the OS. It does not mean that the data is actually transmitted or has been received by the peer.
The OS buffers the data, transmits it over the network, waits for acknowledgements, retransmits data, etc. This all takes time. Eventually the OS will decide that the other end no longer responds and will mark the connection as invalid.
Only then when you perform a socket function on that connection is the application notified of any issues.
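So a heartbeat that really verifies the peer must wait for a reply, not just for send() to return. A minimal sketch, assuming (purely for illustration) a protocol in which the peer echoes a one-byte probe:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    bool peerAlive(int fd, int timeoutSeconds)
    {
        const char probe = '\0';
        if (send(fd, &probe, 1, 0) < 0)
            return false;             // connection already known to be dead

        // bound how long we wait for the reply
        timeval tv{timeoutSeconds, 0};
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        char echo;
        ssize_t n = recv(fd, &echo, 1, 0);
        return n == 1;                // 0 = peer closed, -1 = timeout/error
    }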

TCP - What if client call close() before server accept()

In C/C++, suppose the client and server have finished the 3-way handshake and the connection is sitting in the server's backlog (listen queue). Before the server calls accept(), what happens if the client calls close()? Will the connection be removed from the backlog?
The real-world situation is that the server is sometimes too busy to accept every connection immediately, so there will be some connections waiting in the backlog. The client has a timeout for the first response from the server; if the timeout expires, it will call close() and then retry (or whatever). At that moment, I am wondering whether the server will remove the connection from the backlog.
Please share your ideas. Appreciate it!
Generally speaking, if a client calls close(), the client's protocol stack will send a FIN to indicate that the client is done sending, and will wait for the server to send a FIN,ACK back to the client (which won't happen before the server accepts the connection, as we shall see), and then the client will ACK that. This would be a normal termination of a TCP connection.
However, since a TCP connection consists of two more or less independent streams, sending a FIN from the client really is only a statement that the client is done sending data (this is often referred to as "half closed"), and is not actually a request at the TCP protocol level to close the connection (although higher level protocols often will interpret it that way, but they can only do so after the connection has been accepted and they have had a read return 0 bytes in order to learn that the client is done writing). The server can still continue to send data, but since the client has called close(), it is no longer possible for this data to be delivered to the client application. If the server sends further data, the protocol stack on the client will respond with a reset, causing an abnormal termination of the TCP connection. If the client actually wished to continue receiving data from the server after declaring that it was done sending data, it should do so by calling shutdown(sock,SHUT_WR) rather than calling close().
So what this means is that the connections that time out and that are normally closed by clients will generally remain active at the server, and the server will be able to accept them, read the request, process the request, and send the reply, and only then, when the reset comes back from the client, discover that the client application could not read the reply. The reason I say "generally" is that firewalls, proxies, and OS protocol stacks all place limits on how long a TCP connection can remain in a half-closed state, generally in violation of the relevant TCP RFCs but for "valid" reasons such as dealing with DDoS.
I think your concern is that a server that is overloaded will be further overloaded by clients timing out and retrying, which in my view is correct based on my preceding explanation. In order to avoid this, a client timing out could set SO_LINGER to 0 prior to calling close() which would cause a reset to be sent to cause an immediate abnormal termination. I would also suggest using an exponential back-off on timeout to further mitigate the impact on an overloaded server.
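A sketch of that client-side mitigation, with illustrative names: set SO_LINGER to zero so close() aborts with a reset, then back off exponentially before retrying:

    #include <sys/socket.h>
    #include <unistd.h>

    void abortAndBackOff(int fd, int attempt)
    {
        linger lin{1, 0};  // l_onoff=1, l_linger=0: close() sends a reset
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
        close(fd);
        // exponential back-off: 1, 2, 4, ... seconds, capped at 64
        sleep(1u << (attempt < 6 ? attempt : 6));
    }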
Once the 3-way handshake is complete, the connection is in the ESTABLISHED state. On the client side, it can start sending data immediately. On the server side, the connection is placed in a state/queue that accept() can then pull from so the application can use the connection (see How TCP backlog works in Linux).
If the server doesn't accept() the connection, the connection is still ESTABLISHED; its inbound buffer will simply fill up with whatever data the client sends, if any.
If the client disconnects before accept() is called, then the connection still enters the CLOSED state, and will be removed from the queue that accept() pulls from. The application will never see the connection.
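A toy sketch of the buffering point above (server side only; listenFd is assumed to be an already-listening socket): data the client wrote while the connection sat in the queue is returned by recv() as soon as the server finally accept()s it.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    void acceptAndDrain(int listenFd)
    {
        int conn = accept(listenFd, nullptr, nullptr);
        if (conn < 0)
            return;
        char buf[4096];
        ssize_t n;
        while ((n = recv(conn, buf, sizeof(buf), 0)) > 0)
            std::printf("read %zd bytes queued before accept()\n", n);
        close(conn);  // n == 0 would mean the client already closed
    }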

KEEP_ALIVE still sending packets to server after network cable or server disconnected

I have a simple client/server application, which I downloaded from MSDN. I have used the WSAIoctl() function to make the client send keep-alive packets to the server; the client should eventually fail with WSAECONNRESET if the server does not respond within the maximum number of retries. I can see the keep-alive packets in Wireshark while both are connected. Now, if I unplug the cable, I can still see those keep-alive packets being sent to the server and received at the client side. This goes on indefinitely. I am unable to understand how this can happen: my server is no longer attached to the network, yet the connection stays in the ESTABLISHED state.
I am wondering how this is possible. If I close my server while it is connected to the client, I can still see keep-alive packets running in Wireshark. There is no way I can detect a broken connection.
I receive a FIN/ACK from the server when I close the server application, yet the keep-alive continues to run.
Kindly help me out.
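For reference, the keep-alive setup the question describes is typically configured along these lines (a sketch; the timing values are examples only, and on Windows Vista and later the number of unanswered probes before the connection is dropped is fixed at 10 by the OS):

    #include <winsock2.h>
    #include <mstcpip.h>

    bool enableKeepAlive(SOCKET s)
    {
        tcp_keepalive ka{};
        ka.onoff             = 1;     // enable keep-alive on this socket
        ka.keepalivetime     = 5000;  // ms of idle before the first probe
        ka.keepaliveinterval = 1000;  // ms between unanswered probes
        DWORD bytesReturned  = 0;
        return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                        nullptr, 0, &bytesReturned, nullptr, nullptr) == 0;
    }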

How to properly catch the initial response of a server?

My C++ program is trying to check the status of an FTP server. It uses Winsock and a simple testing function that looks like this (pseudocode):
create a tcp socket
connect to port 21 of the server
do recv() while data is available
close socket
return received data
It doesn't send any data to the server with send(); it just tries to catch the server's initial response. This actually works and returns "200 Response Server Ready", which is exactly what I need. But on the 2nd run of the same function (immediately after the 1st) it returns nothing (because recv() returns -1). Wireshark tells me the server really didn't send a response. After that, I placed a half-second pause between calls of the testing function and now it works, but this solution is unwanted.
The question is: how do I properly catch the initial response of a server (not only an FTP server, but any other too)?
I think you don't need to catch the server's answer. If your program can connect to the port, then some server is listening on that port and is up. So just open the connection and check the socket variable.
PS: A lot of servers don't do anything until you send data to them (not sure about FTP); you have to send something to get an answer. But for a status check, a successful connection is enough.
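If you do want to capture the greeting reliably rather than just test the connection, wait for readability with a timeout before reading, so a banner that arrives a moment late is not mistaken for no response. A Winsock sketch (WSAStartup and connect are assumed to have been done already):

    #include <winsock2.h>

    int readBanner(SOCKET s, char* buf, int cap, int timeoutSeconds)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(s, &rfds);
        timeval tv{timeoutSeconds, 0};  // give the server time to answer
        // the first argument to select() is ignored on Windows
        if (select(0, &rfds, nullptr, nullptr, &tv) <= 0)
            return -1;                  // timed out or failed: no greeting
        return recv(s, buf, cap, 0);    // >0 bytes read, 0 = closed, -1 = error
    }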