Measuring NTPd in the Cloud II

The problem with ntpd is the sudden performance degradations which can occur when conditions deviate from ‘ideal’. We have detailed these robustness issues of ntpd in prior work, including [17, 18]. Simply put, when path delay variability exceeds some threshold, which is a complex function of parameters, stability of the feedback control is lost, resulting in errors which can be large over small to very long periods. Recovery from such periods is also subject to long convergence times [...]

Now to the results. As expected, and from the very first migration, ntpd exhibits extremely large errors (from -1 to 27 s!) for periods exceeding 15 minutes (see zoom in middle plot) and needs at least another hour to converge to a reasonable error level.

from Virtualize Everything but Time, Broomhead, Cremean, Ridoux, Veitch

TimeKeeper is a “from the ground up” implementation of clock synchronization over both PTP and NTP and does not share NTPd’s limits.


Economics of Free Software

Fate has made me the “money guy” for OpenSSL so I’m going to talk about that for a bit.

As has been well reported in the news of late, the OpenSSL Software Foundation (OSF) is a legal entity created to hustle money in support of OpenSSL. By “hustle” I mean exactly that: raising revenue by any and all means[1]. OSF typically receives about US$2000 a year in outright donations and sells commercial software support contracts[2] and does both hourly rate and fixed price “work-for-hire” consulting as shown on the OSF web site. The media have noted that in the five years since it was created OSF has never taken in over $1 million in gross revenues annually.

Thanks to that publicity there has been an outpouring of grassroots support from the OpenSSL user community, roughly two hundred donations this past week[3] along with many messages of support and encouragement[4]. Most were for $5 or $10 and, judging from the E-mail addresses and names, were from all around the world. I haven’t finished entering all of them to get an exact total, but all those donations together come to about US$9,000.

OpenSSL uses a “give away code and charge for consulting” model that FSMLabs began with in 1999. We couldn’t make it work either.


At issue in Apple versus Samsung patent fight

A system and method causes a computer to detect and perform actions on structures identified in computer data. The system provides an analyzer server, an application program interface, a user interface and an action processor. The analyzer server receives from an application running concurrently data having recognizable structures, uses a pattern analysis unit, such as a parser or fast string search function, to detect structures in the data, and links relevant actions to the detected structures. The application program interface communicates with the application running concurrently, and transmits relevant information to the user interface. Thus, the user interface can present and enable selection of the detected structures, and upon selection of a detected structure, present the linked candidate actions. Upon selection of an action, the action processor performs the action on the detected structure.




Managed services (and cloud) for financial industry

The nature of data security is constantly changing for the financial industry. With the ever-growing need for tighter IT infrastructure security and the increasing prevalence of BYOD-friendly workplaces, financial institutions are turning to managed services to help handle these changes. In fact, a TABB Group forecast predicts that by 2016, 50% of financial institutions will be opting for managed services to handle their IT infrastructure management. Similarly, a TSIA survey cites managed services as the fastest-growing revenue source in the service line (which includes professional services, support services, and field services). [Onramp]

Cassandra Cluster Synchronization

Cassandra is a highly distributable NoSQL database with tunable consistency. What makes it highly distributable also makes it, in part, vulnerable: the whole deployment must run on synchronized clocks.

It’s quite surprising that, given how crucial this is, it is not covered sufficiently in the literature. And when it is, the advice is simply to install an NTP daemon on each node – which, if followed blindly, leads to really bad consequences. You will find blog posts by users who got burned by clock drift. [Holub]

Cassandra labels updates with times and then uses those times to determine the freshest update. Suppose machines A and B are both working with a database of records. B updates record R, then A updates record R, and 3 out of 5 machines holding copies of R get this second update. When B then asks for record R, the read operation gets 5 responses – it can throw away the stale copies of R and forward the fresh copy, because the timestamp on the update from A is more recent than those on the older copies. But this only works if the times are right – which means you need solid clock synchronization. Time synchronization clients like NTPd and PTPd (and variants) will silently fail, so the system can lose synchronization without notice. This will silently corrupt data. One of the most sophisticated parts of TimeKeeper is the algorithm to detect incorrect times, to fail over where possible, and to alarm where not possible.
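The read path described above amounts to a "last write wins" rule. The following is an illustrative sketch only, not Cassandra's actual code (the `Version` and `resolve` names are hypothetical); it shows how a slow clock silently corrupts the outcome:

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    timestamp: float  # the writer's local clock at write time

def resolve(copies):
    """Pick the copy with the latest timestamp; stale copies are discarded."""
    return max(copies, key=lambda v: v.timestamp)

# B writes at t=100, then A writes at t=105: A's later update correctly wins.
print(resolve([Version("from-B", 100.0), Version("from-A", 105.0)]).value)

# But if A's clock runs 10 s slow, its *later* write carries t=95 and
# silently loses to B's older value -- no error is ever raised.
print(resolve([Version("from-B", 100.0), Version("from-A-slow-clock", 95.0)]).value)
```

The first `resolve` returns A's fresh copy; the second returns B's stale one, which is exactly the silent corruption mode described above.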

See: TimeKeeper, Clouds, Big Data

NTPd DOS attacks

Miscreants who earlier this week took down servers for League of Legends and other online game services used a never-before-seen technique that vastly amplified the amount of junk traffic directed at denial-of-service targets.

Rather than directly flooding the targeted services with torrents of data, an attack group calling itself DERP Trolling sent much smaller sized data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase. [Ars Technica]
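The amplification arithmetic is easy to verify. A back-of-the-envelope sketch (the 8- and 468-byte figures are from the article; the attacker bandwidth is an invented illustration):

```python
request_bytes = 8        # spoofed NTP request, per the article
response_bytes = 468     # typical response delivered to the victim

amplification = response_bytes / request_bytes
print(amplification)     # the "more than 58-fold increase"

# An attacker with a modest 10 Mb/s uplink can thus direct roughly
# 585 Mb/s of junk traffic at the victim.
print(10 * amplification)
```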

Servers running TimeKeeper would diagnose such attacks early and ignore them. In fact, whether TimeKeeper is serving NTP or Precision Time Protocol (PTP), it will shrug off many of the exploits that can bring down networks running free software time synchronization. The ubiquitous NTPd free server and client software is designed to expect a well-controlled network with a limited range of possible errors and, certainly, without malicious requests. TimeKeeper is designed for an enterprise environment where infrastructure failures are not tolerable. The protocols simply define how clock updates are distributed on the network and the format of the packets; they do not solve problems of network robustness or security. Security, failover, audit trail, manageability and other critical enterprise service properties are the responsibility of the software that implements these protocols.

Managers of enterprise networks that depend on time distribution and synchronization need to determine whether these are critical services or not. If not, then lack of instrumentation, legacy security holes, and in-house patching may be acceptable. On the other hand, if timing is a critical service, then enterprise-qualified software components are not optional. This issue should be of particular concern to firms that offer their own customers time as a service, because failures on the customer side might be perceived by customers as a failure in the service itself, and those failures may even blow back into the provider network.



Data tiedowns with reliable time stamping

Management teams are growing more reliant on the ability to immediately access and quickly sort through massive amounts of data to find the information they need – Data Governance for Financial Institution

A “data tiedown” is a reliable and cross-checked timestamp that secures a data item or group of data items to one or more fixed times – such as the time of collection, generation, or storage. Data analysis cannot compensate for bad data, and distributed processing makes it easy to generate bad data, whether from new inputs or by corrupting existing data. As transaction rates increase and data size grows, traditional locking techniques are both more expensive to implement and easier to get wrong. Timestamps can be an important part of data integrity assurance by imposing order on data, even if it is generated or stored by multiple processing elements that are only loosely coupled.

But timestamping is in itself a fragile process. One common error mode for networks relying on IEEE 1588 Precision Time Protocol (PTP) for clock synchronization involves a loss or incorrect calculation of the number of leap seconds since the epoch. If data integrity depends on timestamps, a sudden jump of 35 or 16 seconds can have a devastating impact. So the integrity of timestamps is something that also needs to be safeguarded.

One way to do that is to do what TimeKeeper does – track multiple reference time sources so that the clock can be continuously cross-checked against other clocks. The timestamp is then provided with a record of how well it matches secondary, tertiary, or deeper sources. When these all match, the timestamp has high reliability. Errors can also be traced – there is a forensic methodology available to validate time and locate problems. Data is then tied to a timestamp and the timestamp is tied to a log that verifies its accuracy – the combination is an example of a data tiedown.
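The cross-checking idea can be sketched in a few lines. This is an illustrative sketch only, not TimeKeeper's algorithm; the `tiedown` function, the source names, the 1-second tolerance, and the two-source quorum are all invented for the example:

```python
def tiedown(local_ts, reference_times, tolerance=1.0):
    """Attach to a timestamp a log of its offset from each reference source.

    local_ts: the timestamp being secured (seconds).
    reference_times: {source_name: reading from that reference source}.
    """
    log = {name: local_ts - ref for name, ref in reference_times.items()}
    agreeing = [n for n, off in log.items() if abs(off) <= tolerance]
    return {
        "timestamp": local_ts,
        "offsets": log,                    # the forensic record
        "reliable": len(agreeing) >= 2,    # quorum of agreeing sources
    }

# A 35 s leap-second error at one source stands out against the others,
# so the timestamp is still marked reliable and the bad source is exposed.
refs = {"gps": 1000.0, "ntp_pool": 1000.1, "bad_leap": 1035.0}
record = tiedown(1000.05, refs)
print(record["reliable"], record["offsets"])
```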


Challenges upgrading time synchronization in a global financial company

The real-world challenges of trying to upgrade time synchronization without TimeKeeper are described in an excellent 2012 technical paper about a project at IMC, a global financial company [IMC]. The article documents a two-year, well-resourced project run by an expert engineering team – a nice data point for perhaps the best case of time synchronization using the open source alternatives to TimeKeeper. The tradeoff evaluation, of course, will be different in every enterprise, but the blunt analysis of the limits of the existing open source technology contained in the IMC article is refreshing:

In all, this paper contributes concrete examples where PTP’s byzantine robustness, scalability and efficiency characteristics range between absent to poor – and attempts to raise awareness on the steps needed to build PTP solutions with the characteristics that global users want.

PTP, as it is currently defined, is not (yet) a viable solution beyond smallish LANs

The cautionary statements in the IMC paper will come as a surprise to many in the industry, because PTP has been aggressively marketed as the solution to time synchronization. In practice, PTP is just a protocol specification – it is the implementation that matters, and many of the limitations that confounded the IMC developers are not limitations of the protocol, but limitations of the implementation in the open source clock synchronization software they utilized.

Clock Synchronization Background

Let’s step back and look at the underlying issue of clock synchronization. Application programs running on separate computers rely on “clocks” that are found in each computer and adjusted by software. If those clocks are not synchronized, it is impossible to consolidate records because we won’t know when events happened, or even the order of events. Furthermore, without synchronized clocks, application programs that work together cannot determine how long collaborative transactions took or how long data takes to get from machine to machine. Clock synchronization is therefore essential to both data integrity and system management. Synchronizing clocks down to levels of microseconds or below has become a requirement in some parts of financial trading (down to milliseconds or below in cloud-based computing) and, increasingly, in other areas as well. So the basic problem is how to synchronize these clocks and keep them synchronized – particularly because the hardware clocks on computers tend to accumulate errors rapidly. That challenge is compounded in a cloud compute environment.

To synchronize, each application server must continuously correct its clock, relying on updates sent over the network from some reference time source. One common type of reference time source is a device that gets the time from the Global Positioning System (GPS) or other satellites and then shares that time with “clients” over the network. The older, ubiquitous standard for sharing those time updates is called the Network Time Protocol (NTP), but there is a newer, heavily marketed alternative called the IEEE 1588 Precision Time Protocol (PTP). The open source NTPd will consume and produce NTP packets, while there are several open source variants of PTPd that do the same for PTP packets. TimeKeeper can work with both protocols – and with multiple PTP “profiles” – and also interoperates with the open source implementations.
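The basic NTP update transaction works from four timestamps: client send (t1), server receive (t2), server send (t3), and client receive (t4). A minimal sketch of the standard offset/delay calculation from RFC 5905 (the example numbers are invented):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock-offset and round-trip-delay estimate.

    t1: client transmit time (client's clock)
    t2: server receive time  (server's clock)
    t3: server transmit time (server's clock)
    t4: client receive time  (client's clock)
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Example: client clock 0.5 s slow, symmetric 10 ms one-way path.
offset, delay = ntp_offset_delay(100.000, 100.510, 100.511, 100.021)
print(round(offset, 3), round(delay, 3))
```

Note that the offset estimate is only exact when the path delay is symmetric; asymmetry and variability in that delay are precisely what makes a naive client fragile.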

PTP limits and fault-detection and recovery

TimeKeeper is protocol agnostic – we provide sub-microsecond clock synchronization, fault tolerance, scalability, and process documentation with either PTP (in several versions) or NTP. Each protocol has different advantages in different situations, and some problems are best solved above the network protocol layer – transparently to applications. And PTP is a standard that needs a lot of help in the enterprise. Perhaps the most important example is that the PTP “best master clock” protocol makes fault detection difficult or impossible and can easily put users in violation of existing regulations. The IMC developers discovered this problem mid-project and found it costly and difficult to meet regulatory and audit requirements.

The implications of this aspect of the PTP protocol only became clear after PTP was deployed worldwide, using multiple GMs to service thousands of clients. On several occasions, a GM bug caused the (single) time source to send time without leap seconds information, for two (!) hours. As the active GM continued to send “Announce” packets as normal, with the same BMC values (priority, clock class, etc.), the inactive GMs had no reason to take over. All clients, however, saw a 34 second offset without any indication that this time might be invalid or suspicious in any way. Consequently, they either “corrected” this situation by stepping (jumping) the clock backwards, or by slewing (slowing down) the clock at maximum speed to the “new” value. Both cases are unacceptable in our regulatory environment (namely FINRA rule 7430 [3])

TimeKeeper fixes this problem above the protocol layer. Sophisticated filtering methods dispose of bad updates, cross-checks of time sources detect failing or compromised sources, and there are configurable triggers for when to fail over to a backup source. TimeKeeper can also notify automated monitoring software and network operators when it sees problems. The number of independent sources monitored by TimeKeeper is essentially arbitrary: some customers use five sources to provide a very high level of reliability. Because we solve the unreliable source problem above the protocol layer, a TimeKeeper client can use PTP and NTP sources to cross-check each other. That is, TimeKeeper has the property that the IMC developers eventually proposed as a future direction for PTP:

[and so, in the future] Therefore, the PTP client needs a heuristic to judge the ’trustworthiness’ of its current GM, as compared to the group, and to be able to switch when needed
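The "trustworthiness" heuristic the IMC authors call for is essentially cross-checking against a group of sources. A toy sketch of the idea – not TimeKeeper's actual filtering, which is far more sophisticated; the source names and 0.5-second threshold are invented:

```python
import statistics

def pick_source(offsets, threshold=0.5):
    """offsets: {source_name: measured clock offset in seconds}.

    Flag sources that disagree with the group consensus (median), then
    follow the healthy source closest to that consensus.
    Returns (chosen_source, flagged_outliers).
    """
    median = statistics.median(offsets.values())
    outliers = {n for n, off in offsets.items() if abs(off - median) > threshold}
    healthy = {n: off for n, off in offsets.items() if n not in outliers}
    chosen = min(healthy, key=lambda n: abs(healthy[n] - median))
    return chosen, outliers

# A grandmaster sending time without leap-second information shows a
# 34 s offset against the other sources; it is flagged and bypassed
# instead of being blindly followed.
offsets = {"gm_primary": 34.0, "gm_backup": 0.002, "ntp_ref": -0.001}
print(pick_source(offsets))
```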


TimeKeeper, as illustrated in the graphic above, can track many sources and cross-check them. But there is currently no notion of independent clocks within the PTP standard, so the IMC developers were constrained to less effective, higher-overhead solutions:

These multiple issues were first controlled by deprecating clock stepping, and later by deprecating long (i.e., more than 2 seconds) slews. At the same time, the specific issues were addressed in cooperation with the vendor [the hardware vendors –vy]. In addition, the identified problems resulted in multiple improvements to the configuration of the production time infrastructure and the network itself, especially on the multicast-forwarding configuration.

 While all these measures have objectively improved the situation, they are still insufficient to cover all known and unknown corner cases, as the slave clocks may still drift for long periods of time (at best), or be slowly slewed to any desired offset by a faulty, hacked or GPS-spoofed server (at worst). In all cases, this susceptibility introduces a significant operational overhead to keep the time synchronization system operating and monitored 24 hours/7 days a week.

Managers who choose to use the open source PTPd software in place of TimeKeeper are unknowingly betting that in the future PTP will develop some technology for addressing the problem of unreliable sources and that this future will materialize before their firms encounter a significant embarrassment.

A second look at PTP

The PTP standard has some advantages, but it is essential for technical management to understand the limitations of PTP, which was originally intended to synchronize clocks on simple local area networks (LANs) that could not be more different from what we have in the enterprise. The protocol designers wanted to be able to synchronize the clocks of control devices – sensors and actuators – connected to a single Ethernet network cable. They specified that the reference time sources should “multicast” time updates, essentially sending out time update packets that would be seen by all the devices on the cable. They located most of the intelligence in the time source – the grandly named GrandMaster – because they assumed the clients were simple control devices. As the IMC developers put it:

Acknowledging that PTP was initially designed as a LAN protocol[…], the issues can be broadly divided into a) issues on the PTPv2 standard itself, b) issues that have to be addressed when PTP is expanded to work over WANs, and c) issues that caused the biggest operational impact on the (tested) implementations.

Now compare this target network with, say, IMC’s network, which is similar to many networks in financial trading and other enterprises.

IMC has built and continuously maintains a state-of-the-art technological infrastructure that keeps us competitive. In particular, IMC features a global network that directly connects to over 40 exchanges worldwide, in all major financial locations and spanning all time zones. This network has dozens of datacenters (DCs), either co-located or in proximity to the financial exchanges, all with state-of-the-art switching backbones and inter-connected with a variety of both leased and partially shared high-speed interconnection lines. Together, all these DCs host thousands of servers, most of them performing critical real-time activities and all of them requiring strict traceability of their clocks to UTC for both current (e.g.,[3]) and anticipated regulatory/compliance reasons, for risk mitigation and for internal performance testing.

IMC’s network was, before the project outlined in this paper, already set up to synchronize clocks using the older NTP protocol – which fits into the enterprise more easily. Because the widely used open source NTP implementation (which is also used inside most network clock appliances) cannot deliver the time accuracy required in modern transaction systems, it is sometimes claimed that the protocol itself is limited in accuracy. This is false. TimeKeeper can easily deliver sub-microsecond accuracy over NTP. In fact, the two protocols are quite similar – the biggest difference is that there is hardware support for PTP in some networking equipment. However, there is also hardware support for NTP in some widely available network equipment, and TimeKeeper routinely produces excellent results in that case too.

When firms like IMC use TimeKeeper to upgrade clock synchronization, they can stay with NTP, gradually migrate, or use a mixture of NTP and PTP. The IMC paper describes a number of issues that might motivate network managers to be cautious about a migration to PTP. For example, consider this note about the PTP multicast specification.

Requiring a single multicast address and group caused severe operational problems for IMC, as it requires all hosts to be both senders and receivers of a single shared WAN multicast group (with the clock separation methods described in section III.B). This requires an “all-to-all” semantic that is, in practice, far more complex to build and maintain than the regular “one-to-many” multicast semantics, primarily because of asymmetric routing issues.

Ironically, one of the methods the IMC developers used to try to control the effects of PTP multicast is to embrace one of the many non-standard “hybrid modes” that have proliferated in PTP implementations. In this mode, PTP acts a lot like NTP, partially abandoning the multicast base of PTP to use a unicast handshake that is identical to the base NTP update transaction.


Ultimately, infrastructure and network managers need to look at the business logic of their time synchronization requirements and then examine the cost/benefit tradeoffs of different possible solutions. For some, the absence of software licensing costs for PTPd variants will outweigh the higher engineering investment, weaker technology, and commitment to discuss what might be considered proprietary issues on the open source development mailing lists. The IMC paper provides some documentation on the results that can be expected.

For more information contact [email protected]


[IMC] Pedro V. Estrela and Jan L. Bonebakker, “Challenges deploying PTPv2 in a Global Financial Company,” 2012 IEEE International Symposium on Precision Clock Synchronization for Measurement, Control and Communication.

Time Maps in Action

This shows a time map for a client (center) that is getting NTP time from a variety of standard internet NTP servers and via a customer site (to the left). A TimeKeeper Pocket GrandMaster (bottom left) is serving multiple Stratum servers within the customer network, and we are connecting our server to use those as part of our fault-tolerant multi-source configuration.