We have the following configuration: server, client, DHCP server. The server runs on a static IP in "client mode": the server has a list of all clients (hostnames) and initiates the TCP connections. The clients have dynamic IPs.
How it works: 1) the server creates a connection to a client, and 2) the server waits for any data from the client (we use the ACE framework and the reactor pattern). To keep the client list up to date, we added an additional timer that sends a heartbeat to all clients.
And here is the strange behavior:
Let's say the hostname "somehost.internal" resolves to IP 10.10.100.50.
Time = t: connect to hostname "somehost.internal"
Time = t+1: the client's IP changes to 10.10.100.60
Time = t+2: the heartbeat timer sends data to the existing endpoint (10.10.100.50) and the send returns successfully (why? this IP is no longer reachable)
In Wireshark I can see retransmission packets for this data.
Time = t+5: a few seconds later the event handler returns with an error and the connection to the endpoint (10.10.100.50) is closed
Do you have any advice on why a blocking send function returns successfully when the remote endpoint no longer exists?
I assume that in the heartbeat step (t+2) you only send a heartbeat message to the client but do not actually wait for a response to that heartbeat.
The socket send() function only forwards the data to the OS. It does not mean that the data has actually been transmitted or received by the peer.
The OS buffers the data, transmits it over the network, waits for acknowledgements, retransmits, and so on. This all takes time. Eventually the OS will decide that the other end no longer responds and mark the connection as invalid.
Only then, when you next perform a socket operation on that connection, is the application notified of any issue.
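If you want the OS itself to notice a vanished peer sooner, one common knob is TCP keepalive, optionally combined with a retransmission cap. A minimal sketch, assuming Linux and with error checking omitted; option names and default values differ on other platforms:

    // Sketch: make the OS declare the peer dead sooner (Linux-specific options).
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    void arm_keepalive(int fd)
    {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));               // enable keepalive probes

        int idle = 5, interval = 2, count = 3;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));     // first probe after 5 s of silence
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)); // then probe every 2 s
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));    // declare the peer dead after 3 misses

        unsigned int ms = 10000;
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &ms, sizeof(ms));          // cap total retransmission time at 10 s
    }

That only shortens the window, though. The more robust fix is the one above: treat the heartbeat as request/response and apply an application-level timeout while waiting for the reply.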
I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.
I can start the server and connect the client to it correctly, but if I set the server's MaxConnections to a value N and then try to connect an (N+1)th client, the connection apparently does not fail.
For example: I set MaxConnections=1 on the server; the first client connects to it and the server's OnConnect event is raised, while in the client's OnStatus event I get two messages:
message 1: Connecting to 10.0.0.16.
message 2: Connected.
I try to connect the second client: the server's OnConnect event is NOT raised (which is what I expect), but in the client's OnStatus event I get the same two messages (which is not what I expect):
message 1: Connecting to 10.0.0.16.
message 2: Connected.
Then, the first client can exchange data with the server, and the second client can't (this seems right).
I don't understand why the second client's connection does not fail explicitly. Am I doing something wrong?
You are not doing anything wrong. This is normal behavior for TIdTCPServer.
There is no cross-platform socket API at the OS level [1] to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (though this is more of a suggestion than a hard limit; the underlying socket stack can override it if it wants to).
As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if the MaxConnections limit has been exceeded.
So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure in connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they really did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake completed). However, they will not notice the disconnect until they actually try to communicate with the server; then they will detect it, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
[1]: on Windows only, there is a WSAAccept() function whose callback can reject pending connections before they leave the backlog queue, but Indy does not make use of this callback at this time.
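To illustrate that last point: a rejected client only notices the disconnect on its next read or write. A rough C++Builder sketch (the TIdTCPClient instance and the strings are placeholders, and the header/unit names can differ between Indy versions):

    #include <IdTCPClient.hpp>
    #include <IdExceptionCore.hpp>   // EIdConnClosedGracefully (unit name may differ in older Indy versions)

    void TryTalkToServer(TIdTCPClient *Client)   // Client: an already configured TIdTCPClient (hypothetical)
    {
        Client->Connect();   // succeeds even when the server is over MaxConnections:
                             // the 3-way handshake completed before the server closed the socket
        try {
            Client->IOHandler->WriteLn("HELLO");
            String Reply = Client->IOHandler->ReadLn();   // a rejected client typically fails here
        }
        catch (const EIdConnClosedGracefully &) {
            // the server accepted us and immediately closed the connection (MaxConnections exceeded)
        }
    }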
Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.
The nature of TCP is that it's supposed to handle network drops, so the sender does not immediately bail out but keeps trying to connect for some period of time. This part is consistent across all TCP implementations.
If you want your client to quickly fail a connection that is not established within some set period of time, you'll need to implement a manual timeout yourself.
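For a plain BSD-style socket, a manual connect timeout usually means a non-blocking connect() followed by select() or poll(). A minimal sketch (POSIX, error handling abbreviated; addr is assumed to be filled in already):

    #include <sys/socket.h>
    #include <sys/select.h>
    #include <fcntl.h>
    #include <errno.h>

    bool connect_with_timeout(int fd, const sockaddr *addr, socklen_t addrlen, int timeout_sec)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   // make connect() non-blocking

        int rc = connect(fd, addr, addrlen);
        if (rc == 0)
            return true;                                          // connected immediately
        if (errno != EINPROGRESS)
            return false;                                         // immediate failure

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        timeval tv = { timeout_sec, 0 };

        if (select(fd + 1, nullptr, &wfds, nullptr, &tv) <= 0)
            return false;                                         // timed out (or select error)

        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);         // check the deferred connect result
        return err == 0;
    }

With Indy specifically, the client's ConnectTimeout property covers the same need.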
I am currently writing a server application that should instruct multiple clients.
I am unsure about the concept I have designed and would like to receive feedback on it.
There are several identical clients that record and process sensor data. In addition, the results are sent to the server so that the server can react if necessary and send new parameters to the client.
The client should continue to work after the connection has dropped and keep trying to reconnect in the meantime. If there is no connection, the missed data does not have to be transmitted later.
My concept is as follows:
The client logs on to the server.
The client requests an initialization -> server ok.
The client requests parameter A -> server sends parameter
The client requests parameter B -> server sends parameter
...
The client requests parameter Z -> server sends parameter
The client sends initialization finished -> server says ok
endless loop
Server queries measured value X -> client sends measured value
Server sends parameter Y -> client says ok.
So at first the client is the master and asks for the initialization parameters it needs; then server and client swap roles and the server becomes the master.
Should the connection break, the client should reconnect to the server, but would then start with the step:
The client sends initialization finished -> server says ok
so that the initialization is skipped.
Requesting a parameter runs as follows:
Infinite loop
    Send(command)
    Timeout = 1 second
    Receive
    if (!Timeout)
        break
So I send the command and wait a little; if no answer comes, I send the command again. This is shown here in abbreviated form, written out a bit more concretely below. I wrote it in C++ and I use several state machines. The state machines naturally also catch errors when the connection is interrupted and jump back to the initialization state.
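Roughly (a simplified sketch with a plain POSIX socket; the real code lives inside my state machines, and the protocol details are only placeholders):

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <string.h>
    #include <errno.h>

    bool request(int fd, const char *command, char *reply, size_t reply_size)
    {
        timeval tv = { 1, 0 };                                    // 1 second receive timeout
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        for (;;) {
            if (send(fd, command, strlen(command), 0) < 0)
                return false;                                     // connection problem -> back to init state

            ssize_t n = recv(fd, reply, reply_size, 0);
            if (n > 0)
                return true;                                      // got an answer
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                continue;                                         // timeout: resend the command and wait again
            return false;                                         // peer closed or hard error -> back to init state
        }
    }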
Since this is a multi-client application, I find it a little difficult. It runs fine as a single client: I have a Client class that holds a state machine and a socket, and each instance runs in a separate thread.
My problem now is: if the connection is lost, how can I associate the new connection (from an old client) with its existing instance (state machine)? I would do this via some ID comparison, so the client sends its ID first of all (maybe also the MAC address?).
I currently keep the connections to all clients open at all times. Is that state of the art? Or should one send a command, wait for the answer, close the connection again, and then reconnect when necessary?
Many thanks.
Once a TCP connection is established, each side can send data.
You just have to get the write -> read sequencing correct.
This can easily be implemented using non-blocking socket IO.
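As a rough illustration of what the non-blocking approach buys you here (POSIX select(); a real server would watch all client sockets at once, and the function name is just for the sketch):

    #include <sys/select.h>
    #include <sys/socket.h>

    // Returns true when recv() on fd will not block.
    bool readable(int fd, int timeout_ms)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
        return select(fd + 1, &rfds, nullptr, nullptr, &tv) > 0;
    }

    // Usage idea: in the main loop, send whenever you have something to send,
    // and call recv() only when readable() says data is waiting.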
"My problem now is, if the connection is lost, how can I establish a
new connection (from an old client) to its instance (state machine). i
would do this over some id comparison. so that the client sends his id
first of all. (maybe also mac address ???)"
One solution: give each client a UUID. The client must send its ID to the server every time it connects, and the server keeps a map of UUID vs. client socket connection.
If a client is lost, the server can delete the mapping. Both server and client can detect a lost connection, so that should not be a problem.
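A minimal sketch of that server-side bookkeeping (the session contents and the UUID type are placeholders for whatever your framework uses; here the map keeps the per-client state alive across reconnects):

    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    struct ClientSession {            // placeholder for "socket + state machine"
        int socket_fd = -1;
        // ... state machine, last parameters, etc.
    };

    class ClientRegistry {
    public:
        // Called when a client connects and announces its UUID.
        // If the UUID is already known, the old (dead) connection is replaced,
        // but the associated state survives.
        std::shared_ptr<ClientSession> attach(const std::string &uuid, int socket_fd) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto &session = sessions_[uuid];
            if (!session)
                session = std::make_shared<ClientSession>();
            session->socket_fd = socket_fd;
            return session;
        }

        // Called when a connection is detected as lost.
        void detach(const std::string &uuid) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = sessions_.find(uuid);
            if (it != sessions_.end())
                it->second->socket_fd = -1;   // keep the session, drop the dead socket
        }

    private:
        std::mutex mutex_;
        std::map<std::string, std::shared_ptr<ClientSession>> sessions_;
    };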
I have created a simple TCP server which accepts connections and requests and does some processing. I followed an example and it is working fine. I send data to the connected client continuously and it is printed on the client side. However, after around 20-25 minutes the client stops receiving any data. After such an incident the server still shows as running, but when I connect my client to the server again, the server's connection-accept handler doesn't get invoked. Yet I am able to telnet to the server's port and the client is able to connect. Any idea what the problem might be?
Short Version: If I have an open TCP socket running in an io_service, what happens if I stop the service? Should data continue to queue in the TCP socket (assuming the server continues to send data and there has been no disconnect)? If so, can I retrieve that data by resetting and restarting the io_service?
Long Version: I'm trying to put a blocking interface around my asio-based TCP socket API.
The user initially connects using the API which opens a socket to a server.
For each subsequent API call the io_service is reset and started, then data is sent to the server over the TCP socket with boost::asio::write and a response is waited on. The response from the server is handled using async_read_until. When the response is received the handler is invoked, the io_service is stopped, and the original blocking API call releases back to the caller with the data from the server. This works OK for request-response type commands. In summary (sketched in code after the list below):
API blocking call
io_service is reset and started
tcp packet sent to server
server responds
handler invoked
io_service stopped
API call released and data passed to user
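Sketched in code, the wrapper looks roughly like this (Boost.Asio with the older io_service API; names such as BlockingClient and request() are just placeholders, error handling trimmed):

    #include <boost/asio.hpp>
    #include <istream>
    #include <string>

    class BlockingClient {
    public:
        BlockingClient(const std::string &host, const std::string &port)
            : socket_(io_) {
            boost::asio::ip::tcp::resolver resolver(io_);
            boost::asio::ip::tcp::resolver::query query(host, port);
            boost::asio::connect(socket_, resolver.resolve(query));
        }

        // One blocking API call: write the request, run the service until the
        // async_read_until handler fires, then hand the response back to the caller.
        std::string request(const std::string &cmd) {
            boost::asio::write(socket_, boost::asio::buffer(cmd + "\n"));

            io_.reset();                       // allow run() to be called again
            std::string line;
            boost::asio::async_read_until(socket_, response_, '\n',
                [this, &line](const boost::system::error_code &ec, std::size_t) {
                    if (!ec) {
                        std::istream is(&response_);
                        std::getline(is, line);
                    }
                    io_.stop();                // release the blocking call
                });
            io_.run();                         // blocks until the handler has run
            return line;
        }

    private:
        boost::asio::io_service io_;
        boost::asio::ip::tcp::socket socket_;
        boost::asio::streambuf response_;
    };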
Another command is a request-response that starts a broadcast from the server which updates an internal cache on the client side. The idea is that the user accesses this cache with another API function, after an initial attempt to refresh it using an io_service start-stop cycle similar to the above. However, before this attempt the code checks whether any data is available on the socket using one of the options below:
bool is_data_available() {
    // Alternative: ask the OS how many bytes are readable via io_control().
    //boost::asio::socket_base::bytes_readable command(true);
    //socket_->io_control(command);
    //return command.get() > 0;
    return socket_->available() > 0;
}
There is never any data, even though the server has logged that it sent the data.
So, in summary, for the broadcast:
The previous list of bullet points executes successfully and starts the broadcast
I observe that the server has sent the data
I call the code block above to see if there is any data in the socket (note that the service has not been started at this point)
There is never any data
Here's my problem: the client connects to the server via a TCP socket, the server accepts the connection, and then the client sends some data to the server at random intervals. How should the server know when to read data from the client? In other words, is there some kind of listener for incoming data?
In general the server should always be waiting for data on each connection - i.e. when processing a request from the client, the server should immediately start another async_read request on that connection to wait for the next request (and once the server has received a complete request, that request gets processed and so on).
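A bare-bones sketch of that pattern, assuming Boost.Asio only because it is a common choice (session lifetime and error handling heavily abbreviated):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <memory>

    // The server keeps a read pending on every connection at all times.
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(boost::asio::ip::tcp::socket socket) : socket_(std::move(socket)) {}

        void start() { do_read(); }    // arm the first read as soon as the client is accepted

    private:
        void do_read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buffer_),
                [this, self](const boost::system::error_code &ec, std::size_t n) {
                    if (ec) return;                 // client went away -> session is dropped
                    handle_request(n);              // process whatever the client sent
                    do_read();                      // immediately wait for the next request
                });
        }

        void handle_request(std::size_t n) {
            std::cout.write(buffer_, static_cast<std::streamsize>(n));
        }

        boost::asio::ip::tcp::socket socket_;
        char buffer_[1024];
    };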