In this third and final part of our series Uncovering Time in the Financial Markets, we’ll look at clock synchronization techniques for improving the quality of time in the distributed systems that power the trade lifecycle in the financial markets.
Previously I’ve shown how regulators and business strategy in the financial markets are more sensitive than ever to small intervals of time, and why the inherent inaccuracy of time in the trade lifecycle makes any temporal references potentially misleading, not to mention marketing’s blatant misuse of time to justify advantages over competitor offerings.
Time, for our purposes, is the product of a clock that measures changes of a natural phenomenon or of an artificial machine according to the rules of the time standard it is meant to implement. We’re accustomed to dealing with the Mean Time, or Civil Time standard, which is based on the earth’s rotation relative to the sun. Because the earth’s orbit around the sun does not take a whole number of days, leap years are required to correct the drift that would otherwise accumulate in the calendar. Specifically we deal with the International Atomic Time (TAI) and Coordinated Universal Time (UTC) standards, with UTC derived from TAI by subtracting the accumulated leap seconds.
Atomic clocks, which rely on the atomic resonance of the cesium-133 atom, for example, have become the standard for accurate time. In 1967, the 13th General Conference on Weights and Measures defined the International System (SI) unit of time, the second, in terms of atomic time rather than the motion of the Earth. A second was defined as:
The duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of cesium-133 atoms in their ground state undisturbed by external fields.
It turns out that TAI time is based on atomic time and calculated by computing a weighted average of time kept by roughly 300 atomic clocks in over 50 national laboratories around the world. Many of these atomic clocks are cesium clocks.
When software running on a single computer requires a precise version of the current time, it calls the appropriate operating system function: gettimeofday on Linux, or the precise QueryPerformanceCounter and less precise GetTickCount on Windows. The values returned by these functions are based on the system’s local oscillator, which updates the clock counter at a frequency known as the tick rate. This tick rate determines the precision (i.e. resolution) of time. Windows, for example, allows users to query the tick rate via the QueryPerformanceFrequency function (note: this requires support from the underlying hardware). A tick rate of 1,000,000 updates a second, for example, allows the clock to support microsecond precision. One challenge for hardware engineers is setting a tick rate at which the accuracy of the clock can be maintained without overloading the system with the tick events themselves.
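The resolution a clock offers can also be inspected from user space. As a portable illustration (not tied to any particular hardware or tick rate), Python’s standard time module reports the resolution of each clock it exposes, along with whether the operating system is allowed to adjust it:

```python
import time

# Inspect the clocks Python exposes: their resolution (precision)
# and whether the OS may adjust them (e.g. during synchronization).
for name in ("time", "monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution} s, adjustable={info.adjustable}")
```

On a typical system the wall-clock ("time") clock is adjustable while the monotonic clocks are not, which is exactly why synchronization software targets the former.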
There are numerous factors that cause variations in the frequency of oscillation, including the age of the hardware components, system load, and temperature. These variations are called jitter, and jitter leads to clock drift, which results in inaccurate timings.
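To put a number on it, here is a back-of-the-envelope sketch of how quickly a small frequency error accumulates into drift. The 50 parts-per-million figure is an assumed, illustrative value for a commodity crystal oscillator, not a measurement:

```python
# Illustrative only: assume the local oscillator runs fast by
# 50 parts per million (an assumed figure, not a measured one).
FREQ_ERROR_PPM = 50
SECONDS_PER_DAY = 86_400

# A constant frequency error accumulates linearly into clock drift.
drift_per_day = FREQ_ERROR_PPM * 1e-6 * SECONDS_PER_DAY
print(f"drift: {drift_per_day:.2f} seconds per day")  # → drift: 4.32 seconds per day
```

Several seconds of drift per day is far outside what the regulations below tolerate, which is why periodic synchronization is mandatory rather than optional.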
The Financial Industry Regulatory Authority (FINRA, formerly NASD) devised Rule 6953 to address the need for accurate time in its Order Audit Trail System (OATS). The rule imposed clock synchronization requirements by stating:
Rule 6953 requires any FINRA member firm that records order, transaction or related data to synchronize all business clocks used to record the date and time of any market event. Clocks, including computer system clocks and manual timestamp machines, must record time in hours, minutes and seconds with to-the-second granularity and must be synchronized to a source that is synchronized to within three seconds of the National Institute of Standards’ (NIST) atomic clock. Clocks must be synchronized once a day prior to the opening of the market, and remain in synch throughout the day. In addition, firms are to maintain a copy of their clock synchronization procedures on-site. Clocks not used to record the date and time of market events need not be synchronized.
The rule is written to address the requirement for accuracy and precision of time in an inherently distributed system like OATS, as well as the inaccuracy that can result from clock synchronization itself. Like the jitter I described in a system’s local oscillator, variation in the propagation delay of the clock synchronization signal can also cause jitter.
Here we hit upon the double-edged inaccuracy of distributed time. First, the local system clock will drift (a.k.a. clock drift) relative to the other clocks, necessitating each clock’s synchronization to a shared, accurate time source. Second, each clock in the synchronization scheme will experience varying propagation delays (a.k.a. clock skew) to this time source, potentially resulting in further inaccuracy between clocks.
A high-quality clock synchronization solution will ensure accuracy for each node being synchronized by providing a reference source of actual time and disciplining each node’s local clock to that source.
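A minimal sketch of what “disciplining” a clock means, under a toy model where the node corrects a fraction of its measured offset on each synchronization round. Real daemons such as ntpd use a phase-locked loop to do this; the starting offset and gain below are invented for illustration:

```python
# Toy model: a local clock starts 5 seconds ahead of the reference and
# is slewed (gradually corrected) rather than stepped, removing half of
# the remaining offset on each synchronization round.
offset = 5.0   # initial offset from the reference, in seconds (assumed)
GAIN = 0.5     # fraction of the offset corrected per round (assumed)

for round_number in range(10):
    offset -= GAIN * offset  # slew toward the reference

print(f"residual offset after 10 rounds: {offset:.6f} s")
```

Slewing, rather than stepping the clock straight to the reference time, keeps time monotonically increasing, which matters to software that assumes timestamps never run backwards.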
Network Time Protocol
The Network Time Protocol (NTP) is a common clock synchronization standard used on packet-switched networks. It currently stands at version 4.
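The core of the protocol can be sketched from the four timestamps of a single request/response exchange. The offset and delay formulas below are the standard NTP calculations; the timestamp values themselves are made up for illustration, and the offset estimate assumes the network path is symmetric:

```python
# Four timestamps of one NTP request/response exchange (values invented):
t0 = 100.000   # client transmits request   (client clock)
t1 = 100.240   # server receives request    (server clock)
t2 = 100.250   # server transmits reply     (server clock)
t3 = 100.120   # client receives reply      (client clock)

# Standard NTP estimates, assuming symmetric one-way delays:
offset = ((t1 - t0) + (t2 - t3)) / 2   # client clock error vs. server
delay = (t3 - t0) - (t2 - t1)          # round trip minus server processing

print(f"offset={offset:.3f} s, delay={delay:.3f} s")
```

Note how the calculation cancels the propagation delay out of the offset estimate, which is exactly how NTP copes with the clock skew described above.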
Check back soon as we show how standard implementations of the Network Time Protocol handle drift and jitter to synchronize the clocks in the machines that power the trade lifecycle.