Low latency (capital markets)

Low latency is a topic within capital markets, where the proliferation of algorithmic trading requires firms to react to market events faster than the competition to increase the profitability of trades. For example, when executing arbitrage strategies the opportunity to "arb" the market may present itself for only a few milliseconds before parity is achieved. To demonstrate the value that clients put on latency, a large global investment bank has stated that every millisecond lost results in $100m per annum in lost opportunity.[1]

What is considered "low" is therefore relative, and also something of a self-fulfilling prophecy. Many organisations use the term "ultra low latency" to describe latencies of under 1 millisecond, but what is considered low today will no doubt be considered unacceptable a few years from now.

Many factors affect the time it takes a trading system to detect an opportunity and to exploit it successfully.

From a networking perspective, the speed of light c dictates a theoretical latency limit: a trading engine just 150 km (93 miles) from the exchange can never achieve better than a 1 ms round trip to the exchange, before even considering the internal latency of the exchange and of the trading system itself. This limit assumes light travelling in a straight line through a vacuum, which is unlikely in practice: achieving and maintaining a vacuum over a long distance is difficult, and light cannot easily be beamed and received over long distances because of factors such as the curvature of the earth and interference by particles in the air. Light travelling within dark fibre cables does not travel at c, since the fibre is not a vacuum, and the light is repeatedly reflected off the walls of the fibre, lengthening the effective path travelled relative to the length of the cable and hence slowing it down. In practice there are also several routers, switches, other cable links and protocol conversions between an exchange and a trading system. As a result, most low latency trading engines are located physically close to the exchanges, or even in the same building as the exchange (co-location), to reduce latency further.
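The round-trip figure above can be checked with a short calculation. This is a minimal sketch comparing propagation through a vacuum with propagation through optical fibre; the refractive index of 1.47 is a typical value for silica fibre, not a property of any particular cable.

```python
# Theoretical propagation latency over a given distance, comparing
# light in a vacuum with light in silica fibre.

C_VACUUM = 299_792_458   # speed of light in a vacuum, metres per second
FIBRE_INDEX = 1.47       # typical refractive index of silica fibre (assumed)

def one_way_latency_us(distance_km: float, refractive_index: float = 1.0) -> float:
    """One-way propagation delay in microseconds."""
    speed = C_VACUUM / refractive_index   # light slows by the refractive index
    return distance_km * 1_000 / speed * 1e6

# 150 km from the exchange, as in the example above:
vacuum_rtt = 2 * one_way_latency_us(150)              # ~1.0 ms round trip
fibre_rtt = 2 * one_way_latency_us(150, FIBRE_INDEX)  # ~1.47 ms round trip
print(f"vacuum RTT: {vacuum_rtt:.0f} us, fibre RTT: {fibre_rtt:.0f} us")
```

Even the ideal-vacuum case already consumes the whole 1 ms budget, which is why co-location dominates every other optimisation at these distances.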

A crucial factor in determining the latency of a data channel is its throughput. Data rates are increasing rapidly, and rising message volumes directly affect the speed at which messages can be processed. Low-latency systems need not only to get a message from A to B as quickly as possible, but also to process millions of messages per second. See comparison of latency and throughput for a more in-depth discussion.

Where latency occurs

Latency from event to execution

When talking about latency in the context of capital markets, consider the full round trip between a market event and the resulting trade, and how latency accumulates along that chain. A series of steps contributes to the total latency of a trade:

Event occurrence to being on the wire

The systems at a particular venue need to handle events, such as order placement, and get them onto the wire as quickly as possible to be competitive within the marketplace. Some venues offer premium services for clients needing the quickest solutions.

Exchange to application

This is one of the areas where the most delay can be introduced, due to the distances involved, the amount of processing by internal routing engines, hand-offs between different networks, and the sheer volume of data being sent, received and processed from various data venues.

Latency is largely a function of the speed of light, which is 299,792,458 metres per second in a vacuum; this equates to a latency of roughly 3.3 microseconds for every kilometre. However, when measuring the latency of data we need to account for the fibre-optic cable: although it seems "pure", it is not a vacuum, and the refraction of light within it must be accounted for. In long-haul networks the measured latency is approximately 4.9 microseconds for every kilometre. In shorter metro networks the latency rises further, as building risers and cross-connects can push it as high as 5 microseconds per kilometre.
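The per-kilometre figures above make link latency easy to estimate once the route length is known. A minimal sketch, assuming a hypothetical 1,200 km fibre route (the route length is illustrative, not a measured path):

```python
# Estimating one-way link latency from per-kilometre rules of thumb.
# The us-per-km values are the figures discussed in the text above.

US_PER_KM = {
    "vacuum": 3.3,     # theoretical limit: straight line in a vacuum
    "long_haul": 4.9,  # long-haul fibre, refraction accounted for
    "metro": 5.0,      # metro fibre, including risers and cross-connects
}

def link_latency_ms(route_km: float, medium: str) -> float:
    """One-way latency in milliseconds for the given medium."""
    return route_km * US_PER_KM[medium] / 1_000

route_km = 1_200  # hypothetical fibre route length
for medium in US_PER_KM:
    print(f"{medium:>9}: {link_latency_ms(route_km, medium):.2f} ms one way")
```

Note that the model takes the fibre route length as input, not the straight-line distance, for exactly the reason given below: the fibre rarely runs in a straight line.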

It follows that to calculate the latency of a connection, one needs to know the full distance travelled by the fibre, which is rarely a straight line: it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.

Due to imperfections in the fibre, light degrades as it is transmitted. For distances greater than 100 kilometres, either amplifiers or regenerators need to be deployed. Accepted wisdom has it that amplifiers add less latency than regenerators, though in both cases the added latency can be highly variable and needs to be taken into account; in particular, legacy spans are more likely to make use of higher-latency regenerators.

Application decision making

This area does not strictly belong under the umbrella of "low latency"; rather, it concerns the trading firm's ability to take advantage of high-performance computing technologies to process data quickly. It is included here for completeness.

Sending the order to the venue

As with delays between exchange and application, many trades will involve a brokerage firm's systems. The competitiveness of a brokerage firm is in many cases directly related to the performance of its order placement and management systems.

Order execution

The amount of time it takes for the execution venue to process and match the order.

Latency measurement

Terminology

Average latency

Average latency is the mean time for a message to be passed from one point to another; the lower, the better. Times under 1 millisecond are typical for a market data system.

Latency jitter

In many use cases, predictability of latency in message delivery is just as important as, if not more important than, a low average latency. This predictability is referred to as "low latency jitter", where jitter describes the deviation of latencies around the mean latency measurement.

Throughput

Throughput can be defined as the amount of data processed per unit of time. In this context it refers to the number of messages received, sent and processed by the system, and is usually measured in updates per second. Throughput correlates with latency: typically, as the message rate increases, so do the latency figures. To give an indication of the volumes involved, the Options Price Reporting Authority (OPRA) predicted peak message rates of 907,000 updates per second (ups) on its network by July 2008.[2] That is just a single venue; most firms take updates from several venues.
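The three measures above (average latency, jitter, throughput) can all be derived from a set of per-message latency samples collected over a measurement window. A minimal sketch, with invented sample values for illustration:

```python
# Computing average latency, jitter and throughput from per-message
# latency samples. The samples and window length are invented.

import statistics

latencies_us = [120, 95, 110, 480, 105, 98, 102, 115]  # microseconds per message
window_s = 2.0                                         # measurement window, seconds

avg = statistics.mean(latencies_us)        # average latency
jitter = statistics.pstdev(latencies_us)   # deviation around the mean
worst = max(latencies_us)                  # worst-case (tail) latency
throughput = len(latencies_us) / window_s  # messages per second

print(f"avg={avg:.1f} us, jitter={jitter:.1f} us, "
      f"worst={worst} us, throughput={throughput:.0f} msg/s")
```

Note how a single 480 us outlier barely moves the mean but dominates both the jitter and the worst-case figure, which is why jitter is reported separately from average latency.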

Testing procedure nuances

Timestamping/Clocks

Clock accuracy is paramount when testing the latency between systems; any discrepancy will give inaccurate results. Many tests involve locating the publishing node and the receiving node on the same machine, to ensure the same clock time is being used. This is not always possible, however, so clocks on different machines need to be kept in sync using a time protocol such as NTP or PTP (IEEE 1588).
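To illustrate how such protocols keep clocks in sync, the classic NTP-style calculation estimates the clock offset and round-trip delay between two machines from four timestamps. The timestamp values below are invented: they model a client clock running 5 ms behind the server with a 2 ms network delay each way.

```python
# NTP-style offset/delay estimation from four timestamps:
# t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
# All values are in milliseconds, on each machine's local clock.

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (estimated server-minus-client offset, round-trip network delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # clock offset estimate
    delay = (t4 - t1) - (t3 - t2)         # round trip, minus server processing time
    return offset, delay

# Invented example: server clock 5 ms ahead, 2 ms one-way network delay,
# 0.5 ms of server processing time.
offset, delay = ntp_offset_and_delay(t1=100.0, t2=107.0, t3=107.5, t4=104.5)
print(offset, delay)  # offset 5.0 ms, round-trip delay 4.0 ms
```

The offset estimate is exact only when the path is symmetric; any asymmetry between the outbound and return delays appears directly as measurement error, which is one reason co-located, same-machine tests are preferred where possible.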

Reducing latency in the order chain

Reducing latency in the order chain involves attacking the problem from many angles. Amdahl's law, commonly used to calculate performance gains from adding CPUs to a problem, applies more generally to latency improvement: optimising a portion of a system that is already fairly inconsequential with respect to overall latency will yield minimal improvement in overall performance.

See also

References

This article is issued from Wikipedia, version of Friday, April 29, 2016. The text is available under the Creative Commons Attribution/Share-Alike licence; additional terms may apply for the media files.