How good is NTP for distributed time synchronization?

How accurate is NTP for keeping a set of servers time synchronized?
I'm writing a service which requires a set of servers (some acting as clients, some as servers) synchronized to second-level granularity. I'm wondering if NTP is the best thing to use, or if there's something better?
Should I run an NTP server on one of them, and have the others use that as their source? Any other recommendations/horror stories with NTP?
All the servers are Linux.
Update: Service levels:
I'd like one server to be accurate UTC (second-level, not microsecond or such), and I'd like all the other servers to carry the same timestamp as that one server, regardless of whether it's accurate UTC or not. Events are received by this one server from multiple locations at various intervals, and I need all those events to share the same "relative" timestamp. No, I can't have the main server timestamp the events as they come in, because that would require storing an offset (between when the event actually happened and when it was logged), which is a whole lot of extra work and complicates matters needlessly.
I've currently set up one server as a stratum 2 timeserver, using some stratum 1 GPS sources as servers in its ntp.conf; on the other servers, I've set this server as the sole server in ntp.conf.
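For reference, that amounts to something like the following ntp.conf fragments (the hostnames are placeholders, not real servers):

    # ntp.conf on the in-house stratum 2 server
    server gps1.example.org iburst    # stratum 1 GPS source
    server gps2.example.org iburst
    server gps3.example.org iburst
    driftfile /var/lib/ntp/ntp.drift

    # ntp.conf on every other server
    server timehost.internal iburst   # the stratum 2 server above
    driftfile /var/lib/ntp/ntp.drift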
I hope this will be enough.
Thank you!

NTP will keep you well within a second, which is good enough for most applications.
If you need higher precision, and all the servers are running *nix, I would investigate implementing the Precision Time Protocol. It involves multiple candidate master clocks and a best-master-clock election to find a reliable source on the network. This is the time protocol recommended for timestamping events in the power industry (for example, accurate timestamps in the log files for relay actions and metering alarms aided the investigation of the Northeast Blackout of 2003).

First off, you might have a look at the Wikipedia NTP page.
Basically, to start with (I preach this regularly), state what service levels you want. Do you need accurate UTC? To what tolerance? That is, do you really need to know what time it is?
Or do you simply want precise synchronization among the systems?
How many machines are we talking about, and are they geographically distributed?
Some options:
accurate time: set up at least one server as stratum 2, and have it reference at least 3 stratum 1 servers. If you have lots of servers, make that more than one stratum 2 server; obviously you get more reliability by having no single point of failure.
precise synchronization: set up NTP peers (see the sketch after this list).
accurate time and geographical distribution: more than one stratum 2 server, as above, with one "near" each cluster; they can peer at stratum 2 to improve the voting.
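As a rough sketch of the peering option (hostnames invented), each stratum 2 server lists its upstream stratum 1 sources and the other stratum 2 server as a peer in ntp.conf:

    # On timehost-a
    server s1-a.example.org iburst
    server s1-b.example.org iburst
    server s1-c.example.org iburst
    peer   timehost-b.internal

    # On timehost-b, the mirror image
    server s1-d.example.org iburst
    server s1-e.example.org iburst
    server s1-f.example.org iburst
    peer   timehost-a.internal

Peering lets the two servers discipline each other, so they stay mutually synchronized even if an upstream source drops out.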
I don't think there's anything well known better than NTP that's available.
Update: Another question mentions the Precision Time Protocol, PTP (IEEE 1588). This is excellent for precise synchronization, but it depends on multicast.
Also, it's worth considering getting a GPS time source.

Yes, set up one of your servers as your in-house NTP server, and sync the others to that. As I remember, it typically gives you accuracy within milliseconds.
If any of your servers is way off (by default, more than ntpd's 1000-second "panic threshold"), NTP won't fix it: ntpd exits rather than step the clock that far. Starting ntpd with the -g option allows it to make one large initial correction.

Related

Are websockets a suitable low-latency and robust real-time communication protocol between two nearby servers in the same AWS Availability Zone?

Suitable technologies I am aware of:
Websockets
ZeroMQ
Please suggest others if they are a better fit for my problem.
For this use case I have just two machines, the sender and the receiver, and it's important to note they are fixed "nearby" each other, as they will be in the same availability zone on AWS. Answers which potentially relate to message passing over large spans of the internet aren't necessarily applicable. Note also the receiver server isn't queuing these up as tasks, it will just be forwarding select message feeds to website visitors over a websocket. The sending server does a lot of pre-processing and collating to the messages.
The solution needs to:
1. Be very high throughput. At present the sending server is processing about 10,000 messages per second (it's written in Rust) without breaking a sweat. Bursty traffic may increase this up to 20,000 or a bit more. I know ZeroMQ can handle this.
2. Be robust. The communication pipe will be open 24/7, 365 days per year. My budget is extremely limited in terms of setting up clusters of machines as failovers, so I have to do the best I can with two machines.
3. Message durability isn't required or a concern. The receiving server isn't required to store anything; it just needs all the data. The sender server asynchronously writes a durable 5-second summary of the data to a database and to a cache.
4. Messages must retain the order in which they are sent.
5. Have low latency. This is very important, as the data needs to be as realtime as possible.
A websocket seems to get this job done for points 1 to 4. What I don't know is how robust a websocket is for communication that's 24 hours a day, 7 days a week. I've observed websocket connections getting dropped online in general (of course I will write reconnect code and heartbeat monitoring if required, but this still concerns me). I also wonder if the high throughput is too much for a websocket.
I have zero experience with this kind of problem, but I have a very good websocket library that I'm comfortable using. I ruled out Apache Kafka as it seems expensive to get high throughput, tricky to manage ops-wise (ZooKeeper), and overkill since I don't need durability and it's only communication between 2 machines. So I'm hoping for a simple solution.
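For context, the ZeroMQ option I have in mind is roughly this PUSH/PULL sketch (Python for brevity; the endpoints and payloads are invented placeholders):

    import itertools
    import time
    import zmq

    def run_sender():
        # The collating server: push each message as it is produced.
        ctx = zmq.Context()
        push = ctx.socket(zmq.PUSH)
        push.bind("tcp://0.0.0.0:5556")             # placeholder endpoint
        for seq in itertools.count():
            push.send(b"message %d" % seq)          # stand-in payload
            time.sleep(0.0001)                      # ~10k msgs/sec for the demo

    def run_receiver():
        # The websocket-facing server: pull and forward.
        ctx = zmq.Context()
        pull = ctx.socket(zmq.PULL)
        pull.connect("tcp://sender.internal:5556")  # placeholder hostname
        while True:
            print(pull.recv())                      # stand-in for websocket fan-out

ZeroMQ reconnects automatically and preserves per-connection ordering, which would cover points 2 and 4; if the receiver falls behind, PUSH applies backpressure rather than dropping messages.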
It sounds like you are describing precisely what EC2 cluster placement groups provide: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-placementgroup.html
Edit: You should be able to create the placement group with 2 machines to cover your limited budget. Using larger instances, according to your budget, will also support higher network throughput.
Point 4 (ordering) would be supported by SQS FIFO, though note that SQS FIFO queues only support up to 3,000 messages per second with batching.
A managed streaming solution like Kinesis Data Streams would definitely cover your use case, at scale, much better than a raw web socket. Using Kinesis Client Libraries, you can write your consumer to read from the stream.
AWS also has a managed Kafka service (MSK) to remove the overhead of managing components like Apache ZooKeeper: https://aws.amazon.com/msk/

Debugging network applications and testing for synchronicity?

If I have a server running on my machine, and several clients running on other networks, what are some concepts of testing for synchronicity between them? How would I know when a client goes out-of-sync?
I'm particularly interested in how network programmers in the field of game design do this (or any continuous network-exchange application), where real-time synchronization is commonly a vital aspect of success.
I can see how this may be easily achieved on a LAN via side-by-side comparisons on separate machines... but once you branch out the scenario to include clients on foreign networks, I'm just not sure how it can be done without clogging up your messaging system with debug information, and thereby changing the very synchronization behaviour you are trying to observe.
So what are some ways that people get around this issue?
For example, do they simply induce/simulate latency on the local network before launching to foreign networks, and then hope for the best? I'm hoping there are some more concrete solutions, but this is what I'm doing in the meantime...
When you say synchronized, I believe you are talking about network latency. Meaning that a client on a local network may get its gaming information sooner than a client on the other side of the country. Correct?
If so, then I'm sure you can look for books or papers that cover this kind of topic, but I can give you at least one way to detect this latency and provide a way to manage it.
To detect latency, your server can use a type of trace route program to determine how long it takes for data to reach each client. A common Linux program example can be found here http://linux.about.com/library/cmd/blcmdl8_traceroute.htm. While the server is handling client data, it can also continuously collect the latency statistics and provide the data to the clients. For example, the server can update each client on its own network latency and what the longest latency is for the group of clients that are playing each other in a game.
The clients can then use the latency differences to determine when they should process the data they receive from the server. For example, suppose a client is told by the server that its network latency is 50 milliseconds and the maximum latency for its group is 300 milliseconds. The client then knows to wait 250 milliseconds before processing game data from the server. That way, each client processes game data from the server at approximately the same time.
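As a minimal sketch of that idea (Python for brevity; apply_update is an invented stand-in for real game logic):

    import time

    def process_when_aligned(update, my_latency_ms, max_group_latency_ms):
        # Delay handling a server update so every client in the group
        # applies it at roughly the same wall-clock moment.
        wait_ms = max_group_latency_ms - my_latency_ms
        time.sleep(wait_ms / 1000.0)
        apply_update(update)

    def apply_update(update):
        print("applying", update)      # stand-in for real game logic

    # e.g. my latency 50 ms, group max 300 ms -> wait 250 ms before applying
    process_when_aligned({"tick": 42}, 50, 300)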
There are many other (and probably better) ways to handle this situation, but that should get you started in the right direction.

Optimizing Jetty for heartbeat detection of thousands of machines?

I have a large number of machines (thousands and more) that every X seconds each perform an HTTP request to a Jetty server to notify it that they are alive. For what values of X should I use persistent HTTP connections (which limits the number of monitored machines to the number of concurrent connections), and for what values of X should the client re-establish a TCP connection each time (which in theory would allow monitoring more machines with the same Jetty server)?
How would the answer change for HTTPS connections? (Assuming CPU is not a constraint)
This question ignores scaling-out with multiple Jetty web servers on purpose.
Update: Basically the question can be reduced to the smallest recommended value of lowResourcesMaxIdleTime.
I would say that this is less of a jetty scaling issue and more of a network scaling issue, in which case 'it depends' on your network infrastructure. Only you really know how your network is laid out and what sort of latencies are involved in order to come up with a value of X.
From an overhead perspective, persistent HTTP connections will of course have some minor effect (well, I say minor, but it depends on your network), and HTTPS will again have a larger impact... but only from a volume-of-traffic perspective, since you are assuming CPU is not a constraint.
So from a Jetty perspective, it really doesn't need to be involved in the question. You seem ultimately to be asking for help optimizing bytes of traffic on the wire, so really you are looking for the best protocol at this point. Since with HTTP you have to mess with headers for each request, you may be well served looking at something like SPDY or websockets, which give you persistent connections optimized for low round-trip network overhead. But... they seem sort of overkill for a heartbeat. :)
How about just making them send requests at different times? When the first machine checks in, pick a time to return to it as its next heartbeat time (and keep the id/time on the Jetty server); when the second machine checks in, return a different time, and so on.
This way, each machine performs its heartbeat request at a different time, so there is no concurrency spike.
You can also return a random time for the first heartbeat in case all machines start up at the same time.
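A rough sketch of that scheduling idea (Python for brevity; the slot bookkeeping is invented):

    import itertools
    import time

    HEARTBEAT_PERIOD_S = 60      # X: how often each machine should check in
    SLOTS = 1000                 # spread the fleet across this many offsets
    _next_slot = itertools.count()

    def first_heartbeat_reply():
        # On a machine's first check-in, assign it a phase offset so the
        # fleet's heartbeats spread evenly across one period; afterwards
        # the machine just adds HEARTBEAT_PERIOD_S to its previous time.
        slot = next(_next_slot) % SLOTS
        offset_s = HEARTBEAT_PERIOD_S * slot / SLOTS
        return time.time() + HEARTBEAT_PERIOD_S + offset_s

    print(first_heartbeat_reply())
    # With 1000 machines and a 60 s period, check-ins arrive roughly
    # every 60 ms instead of all at once.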

Why is the Lift web framework scalable?

I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?
Also, is the scalability built in right out of the box? Just add additional servers and nodes and it will automatically scale; is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?
I am trying to create a web services framework for the enterprise level that can beat what is out there and is easy to scale, configurable, and maintainable. My definition of scaling is that just adding more servers should let you accommodate the extra load.
Thanks
Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.
As far as actors within a single machine, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me, this is going to require some explanation.
A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30-90% of its time blocked. It spends 100% of its time, however, on that one request.
A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.
Well, if the JVM is limited in its maximum number of threads, and a thread-per-request server needs one thread per concurrent request, then the JVM's thread limit caps the number of concurrent requests the server can handle.
(There are other issues that can impose lower limits, GC thrashing for example. Threads are a fundamental limiting factor, but not the only one!)
Lift decouples threads from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside of actors, so these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application; it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request-handling thread is available, almost immediately, to receive more connections.
This method for decoupling requests from threads allows a Lift server to have many more concurrent requests than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
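To make the shape of that concrete, here is a toy sketch of the decoupling (not Lift's actual code; a Python stand-in for Scala's actor scheduling):

    import queue
    import threading
    import time

    work_queue = queue.Queue()

    def acceptor(request):
        # Request-handling thread: hand the work off and return to the
        # pool immediately instead of blocking for the whole request.
        work_queue.put(request)       # like sending a message to an actor

    def worker():
        # A small pool of "actor" threads processes the messages; they
        # must avoid blocking calls so they free up quickly.
        while True:
            msg = work_queue.get()
            print("handled", msg)     # stand-in for compute + response

    for _ in range(4):
        threading.Thread(target=worker, daemon=True).start()

    acceptor("GET /")                 # the acceptor is free again at once
    time.sleep(0.1)                   # let the demo workers drain the queue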
@mtnyguard:
"Scala and Lift don't do anything to either help or hinder horizontal scaling"
That ain't quite right. Lift is a highly stateful framework. For example, if a user requests a form, he can only post that request back to the same machine the form came from, because the form-processing action is saved in the server state.
And this is actually something that hinders scalability, because this behaviour is inconsistent with a shared-nothing architecture.
No doubt Lift is highly performant, but performance and scalability are two different things. So if you want to scale horizontally with Lift, you have to define sticky sessions on the load balancer, which will redirect a user to the same machine for the duration of a session.
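For example, cookie-based stickiness in HAProxy looks roughly like this (names and addresses invented):

    backend lift_servers
        balance roundrobin
        cookie SERVERID insert indirect nocache
        server lift1 10.0.0.1:8080 cookie lift1 check
        server lift2 10.0.0.2:8080 cookie lift2 check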
Jetty may be the point of entry, but the actor ends up servicing the request. I suggest having a look at the Twitter-esque example, 'skitter', to see how you would be able to create a very scalable service. IIRC, this is one of the things that made the Twitter people take notice.
I really like @dre's reply, as he correctly states that the statefulness of Lift is a potential problem for horizontal scalability.
The problem:
Instead of me describing the whole thing again, check out the discussion (not the content) on this post: http://javasmith.blogspot.com/2010/02/automagically-cluster-web-sessions-in.html
The solution would be, as @dre said, sticky-session configuration on the load balancer in front, plus adding more instances. But since request handling in Lift is done with a thread + actor combination, you can expect one instance to handle more requests than normal frameworks. This gives it an edge over sticky sessions in other frameworks; i.e., an individual instance's capacity to process more may help you scale.
You also have Akka-Lift integration, which would be another advantage here.

World Clock Webservice?

What is the most reliable World Clock Webservice that you use?
Unfortunately, you'll probably never get a really accurate atomic clock webservice due to latency issues with the transport of the messages/packets back and forth from your machine to the server.
Most atomic clocks that are accessible over the internet use a specific protocol, the Network Time Protocol, which measures the round-trip latency of the transport and adjusts for it. This provides a more accurate representation of the atomic clock's time than using a web service over HTTP.
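Concretely, NTP timestamps the request and reply on both ends and estimates the client's clock offset from those four values. A minimal sketch of the standard calculation (assuming the outbound and return paths are symmetric):

    def ntp_offset_and_delay(t1, t2, t3, t4):
        # t1: client send, t2: server receive, t3: server send,
        # t4: client receive (all in seconds).
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # client clock error
        delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
        return offset, delay

    # e.g. 100 ms each way, client clock 2 s slow:
    print(ntp_offset_and_delay(10.0, 12.1, 12.1, 10.2))  # ~ (2.0, 0.2)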
I think if you must use a web service, the most accurate one will be the one hosted on a server that is physically and geographically closest to you and has the fewest network hops between your machine and the server, since this reduces packet latency.
Understood about latency. With that in mind, I go to NIST's site for US times and World Time Server for the rest. Don't know if either is the "best".
I think due to latency, there is no such thing as a reliable atomic clock webservice.
Here's a blog post which comes to the same conclusion.
Purists are quick to point to the accuracy problem. But I bet you could not get perfectly accurate time even if your application were sitting on the same server as the atomic clock software itself.
I think there is a need for a clock web service; I can think of a few scenarios where being off by a few seconds doesn't matter.
Aside from accuracy, another challenging area of serving up date and time is accounting for the daylight saving rules of each country. That is something even the latest OSes struggle to get right, and it is definitely something that would make a clock web service valuable.
Since there are so few web services out there delivering time, http://www.timeapi.org/utc/now is the only reliable web service that I know of (besides http://www.earthtools.org/timezone/0/0, which does not appear to be reliable). Therefore it's the most accurate one I can recommend, especially if you are just using it to determine the difference between local time and UTC, which can be rounded to the nearest 15 minutes. And if you want the time in a specific time zone, replace utc with the three-letter abbreviation for that zone, e.g., http://www.timeapi.org/est/now for Eastern Standard Time.
An NTP service would be fine as long as the latency is predictable. NTP is a lightweight wire protocol, designed to remove moving pieces that could cause additional variation in latency (a.k.a. jitter). A SOAP stack would introduce more variability.