How to synchronize timers of two programs - C++

I am making an application with a client and a server. The client sends coordinates to the server, which uses them to move a robot. What I want is to synchronize the timers used for time-stamping log data, so that I can compare input vs. output. The communication is done over TCP/IP; the client is written in C++ and the server in RAPID (ABB's robot programming language). My problem is that the timers are not synchronized properly.
Right now the timers start when the connection between the two is established:
Server side:

    ListenForConnection;
    startTimer;

Client side:

    connectToServer;
    startTimer;
This does not work. Is there a technique to ensure that the timers are synchronized?
NB: The server can only be reached over the LAN.

You need a protocol between client and server to pass the timestamp.
Right now, presumably you have a protocol for sending coordinates. You need to extend that somehow to allow one side to send timer information to the other side.
The easiest case is when you have two-way communication capability. In that case, the client does:
Connect to server
Keep asking until the server is there
Server says "yes I'm here, the time is 1:00"
The client starts sending coords
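A hedged sketch of that handshake on the C++ client side, assuming POSIX sockets and a made-up wire format (a 'T' request byte answered by one raw int64 of milliseconds; the RAPID side would have to send exactly that, and byte-order conversion is omitted here):

    #include <chrono>
    #include <cstdint>
    #include <sys/socket.h>

    static int64_t now_ms() {
        using namespace std::chrono;
        return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
    }

    // Returns an offset to add to local timestamps so they line up with the server's clock.
    int64_t sync_with_server(int sock) {
        char req = 'T';                          // hypothetical "what time is it?" request
        int64_t t0 = now_ms();
        send(sock, &req, 1, 0);
        int64_t server_time = 0;                 // server replies with its clock in ms
        recv(sock, &server_time, sizeof(server_time), MSG_WAITALL);
        int64_t t1 = now_ms();
        // Cristian's algorithm: assume the server stamped its reply mid round trip.
        return server_time - (t0 + (t1 - t0) / 2);
    }

After this, the client logs now_ms() + offset alongside each coordinate it sends, and both logs share one timebase.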
If the server has no way to send to the client, then the client needs to send a timestamp from time to time, which the server recognises as being a time, not a coordinate. The two will not be synched until this happens the first time.

TCP push-pull socket server design

I am designing a cross-platform messaging service as a learning exercise. I have programmed socket-based servers before, but always a "client-polls-server" design, like a web server. I want to be able to target mobile platforms, and I read that polling is a battery drain, so I would like to do push notification.
The server will be TCP-based, written in C++. What I'm having trouble getting my head around is how to manage the bi-directional nature of the design. I need a client to be able to send packets to the server as normal, but also to listen for packets. How do I handle situations where the client is sending data at the same time the server is trying to send to it, or where the client is blocked listening for data but then needs to send something?
For example, consider this scenario (originally shown as a crude diagram): client A is in the middle of sending a chunk of data to the server. While this is happening, client B sends a message, which causes the server to attempt to send data back to client A, but client A hasn't finished its first send yet. What happens in this instance? Should I set up 2 separate ports on each client, one for inbound and one for outbound? Do I need to keep track of the state of each connection?
Or is there a better approach to this altogether?
A TCP socket is inherently bidirectional. To handle both inbound and outbound traffic more or less concurrently, you need to use nonblocking sockets.
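For example, on POSIX systems a socket can be made nonblocking with fcntl() and then multiplexed with select(); a minimal sketch (Winsock would use ioctlsocket() with FIONBIO instead):

    #include <fcntl.h>
    #include <sys/select.h>

    // Mark a socket nonblocking so reads and writes never stall the thread.
    void make_nonblocking(int sock) {
        int flags = fcntl(sock, F_GETFL, 0);
        fcntl(sock, F_SETFL, flags | O_NONBLOCK);
    }

    // Block until the socket is readable or writable (or both), so one
    // thread can service traffic in both directions as it becomes possible.
    bool wait_readable_or_writable(int sock) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);  FD_SET(sock, &readfds);
        FD_ZERO(&writefds); FD_SET(sock, &writefds);
        return select(sock + 1, &readfds, &writefds, nullptr, nullptr) > 0;
    }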
I think the solution is pretty simple. The TCP server should keep a list of connected clients. Since a TCP connection is bi-directional, the push mechanism is then quite simple. Another important thing: as long as your server isn't multithreaded, you can only read from or write to one client at a time.
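A hedged sketch of such a push, assuming the server tracks its connected client sockets in a std::vector<int>:

    #include <string>
    #include <sys/socket.h>
    #include <vector>

    // Walk the server's list of connected client sockets and write the same
    // message to each one over the existing TCP connection.
    void push_to_all(const std::vector<int>& clients, const std::string& msg) {
        for (int fd : clients) {
            // A real server would handle partial writes and dead connections here.
            send(fd, msg.data(), msg.size(), 0);
        }
    }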

Network transmission time from the TCP/IP stack

Every time a client connects to my server I would like to log the transmission time between the client and the server. Is there a way of doing this without having to ping the client? My server is written in C++ and listens on a LAN TCP/IP socket. I was hoping there was something in the TCP/IP stack that I could use once each client connected.
Update
Transmission time: the number of milliseconds it took the client to connect to the server over a computer network such as a WAN/LAN. Basically the same result that you would get from running ping client1.example.com in a shell.
There is no completely portable way to find the time the client took to set up the connection, since there is no API to query the kernel for this information. But you have some avenues:
If you can capture on the interface you can easily measure the time from the first SYN to the ACK. You should be able to do this on most systems.
On some Unix systems you can enable the SO_DEBUG socket option which might give you this information. In my experience SO_DEBUG is hard to use.
There's nothing explicitly built in to the TCP APIs to do what you want, but assuming you have control over the server and clients, you could do something like:
1. When the client's connect() completes, have the client record the current time (using gettimeofday() or similar).
2. When the server accepts the TCP connection, have it immediately send back some data to the client (it doesn't matter what the data is).
3. When the client receives the data from the server, have the client record the current time again and subtract from it the time recorded in (1).
Now you have the TCP round-trip time; divide it by two to get an estimate of the one-way time.
Of course this method will include in the measurement the vagaries of the TCP protocol (e.g. if a TCP packet gets lost on the way from the server to the client, the packet will need to be retransmitted before the client sees it, and that will increase the measured time). That may be what you want, or if not you could do something similar with UDP packets instead (e.g. have the client send a UDP packet and the server respond, and the client then measures the elapsed time between send() and recv()). Of course with UDP you'll have to deal with firewalls and NAT blocking the packets going back to the client, etc.
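A sketch of steps (1)-(3) on the client, using std::chrono in place of gettimeofday(); the server address and port are placeholders, and the server is assumed to send one byte as soon as it accepts:

    #include <arpa/inet.h>
    #include <chrono>
    #include <cstdio>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);                          // assumed server port
        inet_pton(AF_INET, "192.168.0.10", &addr.sin_addr);   // assumed server address

        connect(sock, (sockaddr*)&addr, sizeof(addr));
        auto t0 = std::chrono::steady_clock::now();           // step 1: connect() completed

        char byte;
        recv(sock, &byte, 1, 0);                              // step 3: server's greeting arrives
        auto t1 = std::chrono::steady_clock::now();

        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("round trip ~%lld us, one-way ~%lld us\n", us, us / 2);
        close(sock);
    }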

Most efficient way to send function calls to clients from server using Winsock 7 C++

I have used C++ & Winsock to create both server and client applications. After connecting to the server, the client displays an animation sequence. The server machine can handle multiple client connections and keeps a count of the total number of clients connected.
Currently, the client animation sequence begins as soon as the client connects. What I want to happen is this: when 2 clients have connected to the server, the server should send a message to client 1 telling it to call its Render() function (which lives in the client), then, at a later time, send another message to client 2 to call the same Render() function.
Just looking for some help as to the best way to achieve this.
Thanks
You can't send function calls (in any direct meaning of the word), since functions live within a single process space and can't (easily) be sent across a socket connection.
What you can do is send a message which the client will act on and call the desired function.
Depending on what protocols you are using, this could be as simple as the server sending a single byte (e.g. 'R' for render, or something) to each client's TCP connection, and the client code would know to call Render() when it receives that byte. More complex protocols might encode the same information more elaborately, of course.
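A hedged client-side sketch of that single-byte scheme, assuming Winsock has already been initialised with WSAStartup() and Render() is the client's existing function:

    #include <winsock2.h>

    void Render();  // provided elsewhere by the client

    // Read command bytes off the socket and dispatch; 'R' means "call Render()".
    void handle_server_commands(SOCKET sock) {
        char cmd;
        while (recv(sock, &cmd, 1, 0) == 1) {
            switch (cmd) {
                case 'R': Render(); break;   // server told us to start rendering
                default:  break;             // unknown byte; ignore or log it
            }
        }
    }

The server then just sends the byte 'R' to client 1's socket, and later to client 2's.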
In interpreted languages (e.g. Java or Python) it is possible to send actual code across the socket connection, in the form of Java .class files, Python source text, etc. But it's almost always a bad idea to do so, as an attacker could exploit the mechanism to send malware to the client and have the client execute it.

Multithreaded Server Issue

I am writing a server in Linux that is supposed to serve an API.
Initially, I wanted to make it Multi-threaded on a single port, meaning that I'd have multiple threads working on various request received on a single port.
One of my friends told me that that is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to the request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port, and that thread creates a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port should now wait for the "current" request to complete? If not, how would the communication still work? Say a browser sends a request; the thread handling it has to first listen on the port, block it, process the request, respond, and then unblock it.
By this logic, even though I have "multithreads", all I'm using is one single thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you wanted to do is a multithreaded server. All you need is one server socket listening and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function - that socket will be used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you will then call accept again in order to accept another connection.
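A minimal sketch of that accept loop with a detached thread per client, assuming POSIX sockets and an arbitrary port:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <thread>
    #include <unistd.h>

    // Runs in its own thread; talks to exactly one client.
    void serve_client(int client_fd) {
        // read requests / write responses for this client here
        close(client_fd);
    }

    int main() {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            // assumed port
        bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        while (true) {
            int client_fd = accept(listen_fd, nullptr, nullptr);  // TCP handshake already done
            if (client_fd >= 0)
                std::thread(serve_client, client_fd).detach();    // one thread per connection
        }
    }

Note that the listening port is never "blocked" while a client is being served; accept() immediately hands back a fresh socket and the main thread goes straight back to accepting.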
TCP/IP does the handshake for you; if you can't think of an application-specific reason to do a handshake, then your application does not demand one.
An example of an application specific handshake could be for user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is more or less built around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. In it, one part accepts connections on TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async and having one worker thread per core is nice, but it might not integrate well with the rest of your environment.)
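For reference, a compressed sketch of that tutorial's accept pattern in modern Boost.Asio; the port is arbitrary and the greeting write stands in for a real per-client handler:

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    // Accept one connection, hand it off, then immediately accept the next.
    void accept_loop(tcp::acceptor& acceptor) {
        acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) {
                // Keep the socket alive for the duration of the async write.
                auto client = std::make_shared<tcp::socket>(std::move(socket));
                boost::asio::async_write(*client, boost::asio::buffer("hello\n", 6),
                    [client](boost::system::error_code, std::size_t) {});
            }
            accept_loop(acceptor);  // keep the server accepting
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
        accept_loop(acceptor);
        io.run();  // single-threaded event loop; call run() from more threads to scale
    }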

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time and then time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it, is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
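For example, on Linux the keepalive behaviour can be enabled and tuned per socket; the option names below are real Linux options, but the numeric values are arbitrary examples:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int sock) {
        int on = 1;
        setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

        int idle = 60;      // seconds of inactivity before probing starts
        int interval = 10;  // seconds between probes
        int count = 5;      // failed probes before the connection is declared dead
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    }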
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
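A hedged client-side sketch of that scheme; the wire messages are hypothetical, and the caller is expected to reset last_sent whenever it sends a real query too:

    #include <chrono>
    #include <sys/socket.h>

    using Clock = std::chrono::steady_clock;

    // Call this from the client's main loop. If no query has gone out for a
    // minute, send an "are you there" message to keep the server's two-minute
    // timer from expiring and to confirm the connection still works.
    void maybe_send_ping(int sock, Clock::time_point& last_sent) {
        if (Clock::now() - last_sent > std::chrono::minutes(1)) {
            const char ping[] = "ARE_YOU_THERE\n";   // made-up wire message
            send(sock, ping, sizeof(ping) - 1, 0);
            last_sent = Clock::now();                // server should answer "YES_I_AM"
        }
    }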
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like http and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: the TCP Keepalive HOWTO, or the SO_KEEPALIVE socket option.