Move from socket to web service: double timeout

I have a client/server application that communicates via tcp/ip sockets over an often unreliable wireless network.
To make it responsive in case of a connection error, I created this protocol:
1) client sends a request
2) server confirms reception of the request (1-second timeout)
3) server processes while the client waits (may take up to 10 seconds) (20-second timeout)
4) server sends response
Sometimes the request command gets lost (the client sends it over an open connection but the server never receives it), but with this protocol I know immediately whether the command has been received and is going to be processed.
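That exchange might look roughly like this on the client side (a minimal sketch assuming plain POSIX sockets and a one-byte ACK; the helper names are mine, not from the original code):

    #include <cstddef>
    #include <sys/socket.h>
    #include <sys/time.h>

    // Set the receive timeout on a connected socket.
    static bool set_recv_timeout(int sock, int seconds) {
        timeval tv{seconds, 0};
        return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) == 0;
    }

    // Steps 1-4 of the protocol: request, 1-second ACK window, then wait
    // up to 20 seconds for the real response.
    bool do_request(int sock, const char* req, std::size_t len,
                    char* resp, std::size_t resp_cap) {
        send(sock, req, len, 0);                       // 1) send the request

        char ack = 0;
        set_recv_timeout(sock, 1);                     // 2) 1-second ACK timeout
        if (recv(sock, &ack, 1, 0) != 1)
            return false;                              //    request never arrived

        set_recv_timeout(sock, 20);                    // 3) allow processing time
        return recv(sock, resp, resp_cap, 0) > 0;      // 4) the actual response
    }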
What I'm asking (I've made some tests with RESTSharp and ServiceStack) is: is it possible to do something like this with a web service, where the client receives a confirmation that the request has been received before the actual response arrives?
Thanks,
Mattia

It's strange to see this type of error handling in application code, since if an HTTP request isn't successful it will already throw an exception.
To get a confirmation receipt in application code you would need to add a second request, implementing some kind of Push Event / Comet technique to get the server to notify the client of different events, e.g. which request IDs have been received.

Related

Concept feedback for a multi-client server in C++

I am currently writing a server application that should instruct multiple clients.
I am unsure about the concept I have designed and would like to receive feedback on it.
There are several identical clients that record and process sensor data. In addition, the results are sent to the server so that the server can react if necessary and send new parameters to the client.
The client should continue to work after the connection has ended and keep trying to reconnect. If there is no connection, the missed data does not have to be transmitted later.
My concept is as follows:
The client logs on to the server.
The client requests an initialization -> server ok.
The client requests parameter A -> server sends parameter
The client requests parameter B -> server sends parameter
...
The client requests parameter Z -> server sends parameter
The client sends initialization finished -> server says ok
endless loop
Server queries measured value X -> client sends measured value
Server sends parameter Y -> client says ok.
So first the client is the master and asks for the initialization parameters it needs; then server and client swap roles and the server becomes the master.
Should the connection break, the client should reconnect to the server, but would then start with the step "The client sends initialization finished -> server says ok" so that the initialization is skipped.
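That handshake and the reconnect shortcut might map onto a small client-side state machine like this (a sketch; the names are illustrative):

    // Client-side states for the handshake described above.
    enum class State {
        Connect,       // log on to the server
        InitRequest,   // request initialization -> server ok
        FetchParams,   // request parameters A..Z
        InitDone,      // send "initialization finished" -> server ok
        Serve          // endless loop: server queries values, sends parameters
    };

    // A fresh client walks Connect -> InitRequest -> FetchParams -> InitDone
    // -> Serve; a reconnecting client can jump from Connect straight to InitDone.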
The request of parameters runs as follows:
    // resend the command every second until a reply arrives
    for (;;) {
        send(sock, cmd, cmd_len, 0);
        fd_set rfds; FD_ZERO(&rfds); FD_SET(sock, &rfds);
        timeval tv{1, 0};  // 1-second receive timeout
        if (select(sock + 1, &rfds, nullptr, nullptr, &tv) > 0)
            break;         // a reply is waiting; read it after the loop
    }
So I send the command and wait a little; if no answer comes, I send the command again (this is shown here in abbreviated form). I wrote it in C++ and I use several state machines. The state machines naturally also catch errors when the connection is interrupted and jump back to the initialization state.
Since this is a multi-client application, I find it a little difficult. It runs fine as a single client: I have a Client class in which a state machine and a socket are stored, and the instance runs in a separate thread.
My problem now is: if the connection is lost, how can I associate a new connection (from an old client) with its existing instance (state machine)? I would do this via some ID comparison, so the client sends its ID first of all (maybe also its MAC address?).
I currently keep the connections to all clients open at all times. Is that state of the art? Or should you send a command, wait for an answer, close the connection again, and then reconnect if necessary?
Many Thanks
Once the TCP connection is established, each side can send data. You just have to get the write -> read sequencing correct. This can be easily implemented using non-blocking socket I/O.
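For example, a minimal sketch with poll() (assuming a connected, non-blocking socket descriptor):

    #include <poll.h>

    // Wait up to timeout_ms for the peer to send something; returns true if
    // the socket is readable. Writes can still be issued at any time.
    bool wait_readable(int fd, int timeout_ms) {
        pollfd p{fd, POLLIN, 0};
        return poll(&p, 1, timeout_ms) > 0 && (p.revents & POLLIN);
    }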
"My problem now is, if the connection is lost, how can I establish a
new connection (from an old client) to its instance (state machine). i
would do this over some id comparison. so that the client sends his id
first of all. (maybe also mac address ???)"
One solution is that each client has a UUID. The client must tell the server its ID every time it connects, and the server can keep a map of UUID vs. client socket connection.
If a client is lost, the server can delete the mapping. Both server and client can detect a lost connection, so that should not be a problem.
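A minimal sketch of such a UUID-to-session map (the class and member names are illustrative, not from the original post):

    #include <memory>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    struct ClientSession { /* socket + state machine live here */ };

    class SessionRegistry {
        std::mutex mtx_;
        std::unordered_map<std::string, std::shared_ptr<ClientSession>> sessions_;
    public:
        // Called after a client sends its UUID on (re)connect: reuse the old
        // session (and its state machine) if one exists, otherwise create it.
        std::shared_ptr<ClientSession> attach(const std::string& uuid) {
            std::lock_guard<std::mutex> lock(mtx_);
            auto& s = sessions_[uuid];
            if (!s) s = std::make_shared<ClientSession>();
            return s;
        }

        // Called when the server gives up on a client.
        void drop(const std::string& uuid) {
            std::lock_guard<std::mutex> lock(mtx_);
            sessions_.erase(uuid);
        }
    };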

Strange behaviour of a TCP connection

We have the following configuration: server, client, DHCP server. The server runs on a static IP in "client mode". This means that the server has a list of all clients (hostnames) and establishes the TCP connections; the clients have dynamic IPs.
How it works: 1) the server creates a connection to a client, and 2) the server waits for any data from the client (we use the ACE framework and the reactor pattern). To keep the client list up to date, we have added an additional timer that sends a heartbeat to all clients.
And there is one strange behavior:
let's say a hostname "somehost.internal" has IP "10.10.100.50"
Time = t: connect to hostname "somehost.internal"
Time = t+1: change IP of the client to "10.10.100.60"
Time = t+2: heartbeat timer sends data to the existing endpoint (10.10.100.50) and returns successfully (why??? this IP is no longer reachable)
In Wireshark I can see retransmission packets
Time = t+5: some seconds later the event handler returns with an error and the connection to the endpoint (10.10.100.50) is closed
Do you have any idea why a blocking send function returns successfully when the remote endpoint no longer exists?
I assume that in the heartbeat step (t+2) you only send a message to the client but do not actually wait for a response to that heartbeat message.
The socket send() function only forwards the data to the OS. It does not mean that the data has actually been transmitted or received by the peer.
The OS buffers data, transmits it over the network, waits for acknowledgements, retransmits data, and so on. This all takes time. Eventually the OS will decide that the other end no longer responds and will mark the connection as invalid.
Only then, when you perform a socket function on that connection, is the application notified of any issues.
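One way to make the heartbeat meaningful is to require an application-level reply and wait for it, e.g. (a sketch with POSIX sockets; the one-byte ping/pong convention is an assumption, not part of the original protocol):

    #include <sys/socket.h>
    #include <sys/time.h>

    // Returns true only if the peer answered the heartbeat within timeout_s.
    // send() succeeding proves nothing: it only queues data in the OS buffer.
    bool heartbeat(int sock, int timeout_s) {
        char ping = 'P', pong = 0;
        if (send(sock, &ping, 1, 0) != 1)
            return false;
        timeval tv{timeout_s, 0};
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
        return recv(sock, &pong, 1, 0) == 1;   // no reply in time => peer is gone
    }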

What happens to an open asio TCP socket when the io_service is stopped and restarted?

Short version: if I have an open TCP socket running in an io_service, what happens if I stop the service? Should data continue to queue in the TCP socket (assuming the server continues to send data and there has been no disconnect)? If so, can I retrieve that data by resetting and restarting the io_service?
Long version: I'm trying to put a blocking interface around my asio-based TCP socket API.
The user initially connects using the API, which opens a socket to a server.
For each subsequent API call the io_service is reset and started, then data is sent to the server over the TCP socket with boost::asio::write and a response is waited on. The response from the server is handled using async_read_until. When the response is received the handler is invoked, the io_service is stopped, and the original blocking API call returns to the client with the data from the server. This works OK for request-response type commands. In summary (a sketch of this cycle follows the list):
API blocking call
io_service is reset and started
tcp packet sent to server
server responds
handler invoked
io_service stopped
API call released and data passed to user
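A minimal sketch of that cycle (the free-function form and the newline-terminated responses are assumptions, not the original API):

    #include <boost/asio.hpp>
    #include <istream>
    #include <string>

    // Blocking round trip: write one command, run the io_service until the
    // '\n'-terminated response has been read, then return it to the caller.
    std::string blocking_request(boost::asio::io_service& io,
                                 boost::asio::ip::tcp::socket& socket,
                                 boost::asio::streambuf& buf,
                                 const std::string& cmd) {
        std::string response;
        io.reset();                                   // allow run() after a stop()
        boost::asio::write(socket, boost::asio::buffer(cmd));
        boost::asio::async_read_until(socket, buf, '\n',
            [&](const boost::system::error_code& ec, std::size_t) {
                if (!ec) {
                    std::istream is(&buf);
                    std::getline(is, response);
                }
                io.stop();                            // release the blocking caller
            });
        io.run();                                     // blocks until stopped
        return response;
    }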
Another command is a request-response that starts a broadcast from the server, which updates an internal cache on the client side. The idea is that the user accesses this cache with another API function, after an initial attempt to refresh it using an io_service start/stop cycle similar to the above. However, before this attempt the code checks whether any data is available on the socket, using one of the options below:
    // Check whether any bytes are already queued on the socket.
    bool is_data_available() {
        // Alternative: query the readable byte count via io_control:
        // boost::asio::socket_base::bytes_readable command(true);
        // socket_->io_control(command);
        // return command.get() > 0;
        return socket_->available() > 0;
    }
There is never any data, even though the server's logs show that it sent the data.
So summary for the broadcast:
Successfully executes the previous list of bullet points to start the broadcasting
Observe that the server has sent the data
Call the above code block to see if there is any data on the socket (note that the service has not been started at this point)
Never any data

boost::asio: receiving data from a client

Here's my problem: a client connects to the server via a TCP socket, the server accepts the connection, and then the client sends some data to the server at random intervals. How does the server know when to read data from the client? In other words, is there some kind of listener for incoming data?
In general the server should always be waiting for data on each connection, i.e. when processing a request from the client, the server should immediately start another async_read on that connection to wait for the next request (and once the server has received a complete request, that request gets processed, and so on).
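A minimal sketch of such a read loop (newline-delimited messages are an assumption):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <memory>
    #include <string>

    using boost::asio::ip::tcp;

    // Keep one async_read_until pending at all times: each completed read
    // handles the data and immediately re-arms the next read.
    void start_read(std::shared_ptr<tcp::socket> sock,
                    std::shared_ptr<boost::asio::streambuf> buf) {
        boost::asio::async_read_until(*sock, *buf, '\n',
            [sock, buf](const boost::system::error_code& ec, std::size_t) {
                if (ec) return;                    // client disconnected
                std::istream is(buf.get());
                std::string line;
                std::getline(is, line);
                std::cout << "received: " << line << '\n';  // process the data
                start_read(sock, buf);             // wait for the next message
            });
    }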

Using sockets, what is the best practice to signal the end of a communication?

I am writing a client-server application using sockets in C++.
The protocol for communications is essentially:
The client connects to the server.
The client "sends" an ASCII command to the server.
The server executes the command remotely, and gets the results, and sends the results back to the client.
The results can be multiple megabytes of data. Once all the results are sent to the client, I would like the server to signal the client that it's done.
Is the best way to call closesocket(), or should the server send a message that indicates to the client that there are no more results, letting the client decide whether to close the socket or not? The drawback of closing the socket is that the client will need to establish a new connection if it wants to execute another command, but the drawback of sending a message back from the server is that the client needs to scan every recv() to determine whether the results are done.
Which is the best practice?
I would take a slightly lateral approach:
Client sends command to server
Server sends the size of the response and then the response itself
Client can issue new command / close connection
In this way the client knows how much to read and can decide whether to close the connection or not.
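A minimal sketch of that framing (a 4-byte big-endian length header; the helper names are illustrative):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Read exactly n bytes, looping over short reads.
    static bool recv_all(int sock, void* dst, size_t n) {
        char* p = static_cast<char*>(dst);
        while (n > 0) {
            ssize_t got = recv(sock, p, n, 0);
            if (got <= 0) return false;
            p += got;
            n -= static_cast<size_t>(got);
        }
        return true;
    }

    // Server side: length header first, then the payload.
    bool send_response(int sock, const std::string& payload) {
        uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
        return send(sock, &len, sizeof len, 0) == sizeof len &&
               send(sock, payload.data(), payload.size(), 0) ==
                   static_cast<ssize_t>(payload.size());
    }

    // Client side: the header tells us exactly how much to read.
    bool recv_response(int sock, std::vector<char>& out) {
        uint32_t len = 0;
        if (!recv_all(sock, &len, sizeof len)) return false;
        out.resize(ntohl(len));
        return recv_all(sock, out.data(), out.size());
    }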