How to check socket connection after every minute in C++

I have implemented a design in C++ that works as follows:
I parse an XML file containing the IP and port of several servers. For each IP and port I first call a connect-to-server function, which makes a TCP socket connection to the server, and then I create a thread for that IP and port whether or not the connection was established. If the connection was not established, the thread reports the server's status as not connected; if it was established, the thread sends a request to the server and receives the response. This is repeated every minute in each thread.
The problem I am now facing is: if the connection terminates, or the server loses power, how do I establish the connection again? That is, every minute, before sending the request and receiving the response, I have to check whether the connection is still there.
Can you please tell me how to do that?

You could have whatever checks the connection status also handle all of the communication for your other classes. The other classes pass it a function pointer; in that method you test the connection and, if it is alive, run the function you were passed. If not, stick it on a queue to be run once the connection is re-established.
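For illustration, here is a minimal sketch of that pattern using POSIX sockets and C++11 threads. The class and member names (ConnectionManager, submit, run, and so on) are made up for the example, and error handling is reduced to the bare minimum.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <chrono>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class ConnectionManager {
    public:
        ConnectionManager(std::string ip, int port) : ip_(std::move(ip)), port_(port) {}

        // Other classes hand their work to the manager instead of touching the
        // socket directly. A task returns false if it hit a socket error.
        void submit(std::function<bool(int)> task) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (sock_ == -1 || !task(sock_)) {
                dropConnection();
                pending_.push(std::move(task));   // retried after the next reconnect
            }
        }

        // Runs forever: once a minute make sure the connection is up, report the
        // status, and flush any queued work.
        void run() {
            for (;;) {
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    if (sock_ == -1)
                        reconnect();              // report "not connected" here if it fails
                    while (sock_ != -1 && !pending_.empty()) {
                        bool ok = pending_.front()(sock_);
                        pending_.pop();
                        if (!ok)
                            dropConnection();     // remaining tasks wait for the next cycle
                    }
                }
                std::this_thread::sleep_for(std::chrono::minutes(1));
            }
        }

    private:
        void dropConnection() {
            if (sock_ != -1) { close(sock_); sock_ = -1; }
        }

        void reconnect() {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port_);
            inet_pton(AF_INET, ip_.c_str(), &addr.sin_addr);
            if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
                sock_ = s;
            else
                close(s);
        }

        std::string ip_;
        int port_;
        int sock_ = -1;
        std::mutex mutex_;
        std::queue<std::function<bool(int)>> pending_;
    };

A background thread calls run(), while the rest of the program only ever calls submit(); a task returns false when its send() or recv() fails, so the manager knows to reconnect on the next one-minute cycle.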

Related

How to detect a connection failure in Indy TCP Client

I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.
I can start the server and connect the client to it correctly, but if I set the server's MaxConnections to a value N and then try to connect an (N+1)th client, the connection apparently does not fail.
For example: I set MaxConnections=1 in the server, the first client connects to it and the server OnConnect event is raised, while in the client OnStatus event I get two messages:
message 1: Connecting to 10.0.0.16.
message 2: Connected.
I try to connect the second client: the server OnConnect event is NOT raised (and this is what I expect) but in the client OnStatus event I get the same two messages (and this is not what I expect):
message 1: Connecting to 10.0.0.16.
message 2: Connected.
Then, the first client can exchange data with the server, and the second client can't (this seems right).
I don't understand why the second client's connection does not fail explicitly. Am I doing something wrong?
You are not doing anything wrong. This is normal behavior for TIdTCPServer.
There is no cross-platform socket API at the OS level [1] to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (but this is more of a suggestion than a hard limit; the underlying socket stack can override it if it wants to).
As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if the MaxConnections limit has been exceeded.
So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure in connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they actually did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake is complete). However, they simply will not notice the disconnect until they try to actually communicate with the server; then they will detect it, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
[1]: On Windows only, there is a WSAAccept() function with a callback that can reject pending connections before they leave the backlog queue, but Indy does not make use of that callback at this time.
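To make that mechanism concrete, here is a plain-sockets sketch (POSIX, not Indy code) of the behaviour described above: the listener accepts every client that completes the handshake and immediately closes any that exceed its own limit, so a rejected client only finds out when it tries to do I/O. The port number and limit are arbitrary, and error handling is omitted.

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        const int kMaxConnections = 1;   // analogous to MaxConnections
        int active = 0;

        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);
        bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(listener, 15);            // analogous to ListenQueue

        for (;;) {
            // The 3-way handshake already completed in the backlog, so from the
            // client's point of view its connect() succeeded.
            int client = accept(listener, nullptr, nullptr);
            if (client < 0) continue;
            if (active >= kMaxConnections) {
                close(client);           // the client only notices when it does I/O
                continue;
            }
            ++active;
            // ... hand 'client' to a worker thread, decrement 'active' when done ...
        }
    }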
Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.
The nature of TCP is that it's supposed to handle network drops. The sender does not immediately bail out, but will keep trying to connect, for some period of time. This part is consistent with all TCP implementations.
If you want your client to quickly fail a connection that does not get established within some set period of time you'll need to implement a manual timeout yourself.
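One way to do that with plain Winsock (shown only to illustrate the idea; this is not Indy-specific code, and the helper name is made up) is a non-blocking connect() followed by select() with a deadline. WSAStartup() is assumed to have been called already.

    #include <winsock2.h>
    #include <ws2tcpip.h>

    // Returns a connected socket, or INVALID_SOCKET if the connect did not
    // complete within timeoutMs.
    SOCKET ConnectWithTimeout(const char* ip, unsigned short port, int timeoutMs) {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET) return INVALID_SOCKET;

        u_long nonBlocking = 1;
        ioctlsocket(s, FIONBIO, &nonBlocking);          // non-blocking mode

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == SOCKET_ERROR &&
            WSAGetLastError() != WSAEWOULDBLOCK) {
            closesocket(s);
            return INVALID_SOCKET;
        }

        fd_set writable, failed;
        FD_ZERO(&writable); FD_SET(s, &writable);
        FD_ZERO(&failed);   FD_SET(s, &failed);
        timeval tv = { timeoutMs / 1000, (timeoutMs % 1000) * 1000 };

        // The socket becomes writable when the handshake completes, lands in the
        // exception set if the connect fails, and select() returns 0 on timeout.
        if (select(0, nullptr, &writable, &failed, &tv) <= 0 || FD_ISSET(s, &failed)) {
            closesocket(s);
            return INVALID_SOCKET;
        }

        u_long blocking = 0;
        ioctlsocket(s, FIONBIO, &blocking);             // back to blocking mode
        return s;
    }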

TCP - What happens if the client calls close() before the server calls accept()?

In C/C++, suppose the client and server have finished the 3-way handshake and the connection is sitting in the server's backlog (listen queue). If the client calls close() before the server calls accept(), what happens? Will the connection be removed from the backlog?
The real-world situation is that the server is sometimes too busy to accept every connection immediately, so some connections wait in the backlog. The client has a timeout for the first response from the server; if that timeout fires, it calls close() and then retries (or whatever). At that moment, will the server remove the connection from its backlog?
Please share your ideas. I appreciate it!
Generally speaking, if a client calls close(), the client's protocol stack will send a FIN to indicate that the client is done sending, and will wait for the server to send a FIN,ACK back to the client (which won't happen before the server accepts the connection, as we shall see), and then the client will ACK that. This would be a normal termination of a TCP connection.
However, since a TCP connection consists of two more or less independent streams, sending a FIN from the client really is only a statement that the client is done sending data (this is often referred to as "half closed"), and is not actually a request at the TCP protocol level to close the connection (although higher level protocols often will interpret it that way, but they can only do so after the connection has been accepted and they have had a read return 0 bytes in order to learn that the client is done writing). The server can still continue to send data, but since the client has called close(), it is no longer possible for this data to be delivered to the client application. If the server sends further data, the protocol stack on the client will respond with a reset, causing an abnormal termination of the TCP connection. If the client actually wished to continue receiving data from the server after declaring that it was done sending data, it should do so by calling shutdown(sock,SHUT_WR) rather than calling close().
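As a short illustration of that last point, here is a POSIX-sockets sketch of a proper half-close (the function name is made up and error handling is omitted): the client announces it is done writing with shutdown(), yet keeps reading until the server closes its side.

    #include <sys/socket.h>
    #include <unistd.h>

    void finishRequestButKeepReading(int sock, const char* request, size_t len) {
        send(sock, request, len, 0);
        shutdown(sock, SHUT_WR);              // send FIN: "I'm done writing"

        char buf[4096];
        ssize_t n;
        while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
            // ... process the server's reply; recv() returns 0 once the server
            //     closes its side of the connection ...
        }
        close(sock);                          // now release the socket
    }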
So what this means is that the connections that time out and that are normally closed by clients will generally remain active at the server, and the server will be able to accept them, read the request, process the request, and send the reply and only then discover that the application can no longer read the reply when the reset is returned from the client. The reason I say "generally" is that firewalls, proxies, and OS protocol stacks all place limits on how long a TCP connection can remain in a half closed state, generally in violation of the relevant TCP RFCs but for "valid" reasons such as dealing with DDOS.
I think your concern is that a server that is overloaded will be further overloaded by clients timing out and retrying, which in my view is correct based on my preceding explanation. In order to avoid this, a client timing out could set SO_LINGER to 0 prior to calling close() which would cause a reset to be sent to cause an immediate abnormal termination. I would also suggest using an exponential back-off on timeout to further mitigate the impact on an overloaded server.
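A minimal sketch of the SO_LINGER suggestion above, assuming POSIX sockets (the function name is made up):

    #include <sys/socket.h>
    #include <unistd.h>

    // Abortive close: the stack sends an RST instead of a FIN, so nothing is
    // left lingering half-closed on the server side.
    void abortConnection(int sock) {
        linger lin;
        lin.l_onoff  = 1;   // enable lingering on close...
        lin.l_linger = 0;   // ...but with a zero timeout, which forces a reset
        setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
        close(sock);
    }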
Once the 3-way handshake is complete, the connection is in an ESTABLISHED state. On the client side, it can start sending data immediately. On the server side, the connection is placed in a state/queue that accept() can then pull from so the application can use the connection (see How TCP backlog works in Linux).
If the server doesn't accept() the connection, the connection is still ESTABLISHED; its inbound buffer will simply fill up with whatever data the client sends, if any.
If the client disconnects before accept() is called, then the connection still enters the CLOSED state, and will be removed from the queue that accept() pulls from. The application will never see the connection.

Why my boost async TCP server connection accept handler stops working after some time?

I have created a simple TCP server which accepts connections and requests and does some processing. I referred to the following example and it works fine. I send data to the connected client continuously and it is printed on the client side. However, after a period of around 20-25 minutes, the client stops receiving any data. After that the server still shows as running, but when I connect my client to the server again, the server's connection accept handler doesn't get invoked, even though I am able to telnet to the server's port and the client is able to connect. Any idea what might be the problem?

How to properly catch the initial response of a server?

My C++ program is trying to check the status of an FTP server. It uses Winsock and a simple testing function that looks like this (pseudocode):
create a tcp socket
connect to port 21 of the server
do recv() while data is available
close socket
return received data
It doesn't send any data to the server with send() - it just tries to catch the server's initial response. This actually works and returns "200 Response Server Ready" - exactly what I need. But the 2nd run of the same function (immediately after the 1st) returns nothing, because recv() returns -1. Wireshark tells me the server really didn't send a response. After I placed a half-second pause between the testing-function calls it works, but that workaround is not what I want.
The question is: How to properly catch the server's (not only ftp, but any other too) initial response?
I think you don't need to catch the server's answer. If your program connects to the port, then some server is listening on that port and is up. So just open the connection and check the socket variable.
PS: A lot of servers don't do anything until you send data to them (not sure about FTP); you have to send something to get an answer. But a successful connection is enough to establish the status.
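Putting the two points together, here is a minimal Winsock sketch (the function name is made up, WSAStartup() is assumed to have been called already, and error handling is trimmed) that treats a successful connect() as "server is up" and only optionally waits for a greeting banner, with a short select() timeout so a silent server cannot block the check:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <string>

    bool CheckServer(const char* ip, unsigned short port, std::string* banner) {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET) return false;

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == SOCKET_ERROR) {
            closesocket(s);
            return false;                       // the server is not reachable
        }

        // The connection itself already answers "is the server up?". Reading the
        // banner is optional, so give recv() at most two seconds.
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(s, &readable);
        timeval tv = { 2, 0 };
        if (select(0, &readable, nullptr, nullptr, &tv) > 0) {
            char buf[512];
            int n = recv(s, buf, sizeof(buf), 0);
            if (n > 0 && banner) banner->assign(buf, n);
        }

        closesocket(s);
        return true;
    }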

C++ send() returns SOCKET_ERROR

In my C++ application I'm using a network (TCP) connection.
When I detect a network connection error, I try to connect through another interface.
On the reconnection, the connect() function passes with no error, but send() returns SOCKET_ERROR and WSAGetLastError() returns 10054.
Do you know what this error means and what I should do to resolve it?
Thanks!
10054 means connection reset by peer -- the remote endpoint replied with an RST packet to tell you that the connection isn't open. Reconnect with connect() instead of trying to simply change interfaces on your local end.
10054 (connection reset by peer) after a successful connect() means that the server accepts the incoming connection but then closes the accepted socket without waiting for any incoming data. The only way to resolve this is to check the server application's logic.
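For completeness, here is a minimal Winsock sketch of the reconnect advice above (the helper names are made up, WSAStartup() is assumed to have been called already, and error handling is trimmed): when send() fails with WSAECONNRESET (10054), the old socket is thrown away and a brand-new connection is made with connect().

    #include <winsock2.h>
    #include <ws2tcpip.h>

    SOCKET ConnectTo(const char* ip, unsigned short port) {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET) return INVALID_SOCKET;
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);
        if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == SOCKET_ERROR) {
            closesocket(s);
            return INVALID_SOCKET;
        }
        return s;
    }

    // Sends the buffer, reconnecting once if the peer reset the connection.
    bool SendWithReconnect(SOCKET& s, const char* ip, unsigned short port,
                           const char* data, int len) {
        if (send(s, data, len, 0) != SOCKET_ERROR)
            return true;

        if (WSAGetLastError() == WSAECONNRESET) {      // error 10054
            closesocket(s);
            s = ConnectTo(ip, port);                   // establish a brand-new connection
            return s != INVALID_SOCKET &&
                   send(s, data, len, 0) != SOCKET_ERROR;
        }
        return false;
    }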