
The Drive to Zero Latency

By Irfan Khan | October 3, 2011

Last year Deutsche Börse, one of the world’s leading financial exchanges, based in Frankfurt, Germany, began developing a new ultra low-latency trading infrastructure linking Frankfurt to London, Paris, Amsterdam, New York and Chicago. The target for the Frankfurt-London link was 5 milliseconds (a millisecond is one thousandth of a second), for Amsterdam 3.3 milliseconds, for Paris 4.5 milliseconds, for New York 40 milliseconds, and for Chicago 49 milliseconds.
In today’s new financial landscape, even those milliseconds aren’t fast enough. The need for speedier operations is just one of the reasons why Deutsche Börse is pursuing a merger with NYSE Euronext, and why the London Stock Exchange is pursuing one with Canada’s TMX Group.
Reducing the time it takes to complete a trade – referred to as latency – is a way to find an edge on stock exchanges and in the highly competitive financial services industry as a whole. Zero latency – i.e., trading at the speed of light – is the Holy Grail of this effort. No one in the industry can afford to ignore even the tiniest steps taken in this direction. Since more than two-thirds of all trades in the United States today are conducted at low latencies, the trader who can get from point A to point B fastest will be the winner.
It’s microseconds (one millionth of a second) that count now, more than milliseconds. The talk in the blogosphere is even of nanoseconds (one billionth of a second), pushing beyond the common metric that a millisecond advantage can be worth $100 million a year to a major brokerage firm.
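To see why the speed of light sets the hard floor in this race, here is a minimal back-of-the-envelope sketch in Python. The propagation speed (roughly 200,000 km/s in optical fiber) and the route distances are assumptions for illustration, not figures from Deutsche Börse.

```python
# Back-of-the-envelope physical floor on one-way latency over optical fiber.
# Light travels through fiber at roughly 200,000 km/s (about two thirds of c in a vacuum).
# The route distances are approximate great-circle figures, assumed for illustration.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # 200,000 km/s expressed in km per millisecond

approx_route_km = {
    "Frankfurt-London": 640,
    "Frankfurt-Amsterdam": 365,
    "Frankfurt-Paris": 480,
    "Frankfurt-New York": 6200,
    "Frankfurt-Chicago": 7000,
}

for route, km in approx_route_km.items():
    one_way_ms = km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{route}: ~{one_way_ms:.1f} ms one-way "
          f"({one_way_ms * 1000:,.0f} microseconds)")
```

Even under these idealized assumptions, the Frankfurt-New York floor is on the order of 30 milliseconds one-way, which is why transatlantic targets are quoted in tens of milliseconds while intra-European links chase single digits.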
Over-achievers in this world make money by taking advantage of fluctuations in trade prices that last just a few microseconds. Identifying and acting on those opportunities faster than competitors separates winners from losers. That is the job of the trading network.
As soon as Deutsche Börse announced its new ultra low-latency system, competitors looked for ways to match or beat it. One of the quickest solutions they came up with was to install specialized hardware. But replacing hardware can be like rebuilding an old car: you put in a fast engine to get it up to speed, but the same old gears still drive the chassis. The engine is perfect, only it’s bolted onto a chassis that is unable to keep up with the new pace.
A popular alternative is proximity trading: co-locating a trader’s servers next to a market center’s own computers to gain low-latency access through physical proximity.
However, to stay ahead as competitors grab these “low-hanging fruit” solutions, the pacesetters in the financial services industry’s race to zero latency are rethinking how they approach technology. They have begun by evaluating solutions that analyze the avalanches of data passing through their systems so that data automatically goes where it is most valued along the trade cycle, all the way from pre-trade analysis to trading, post-trade analysis, settlement and accounting. The focus is on analytics that speed access to the data that is specifically valuable to each part of the business, while eliminating what is irrelevant, to give a single view of the state of the market.
Next-generation information platform
Since the financial services industry has already exploited the low-hanging fruit, firms are now exploring a new technology landscape that will enable them to deal with the challenges of Big Data and extreme transaction volumes. The infrastructure to do this can be classified as the Next Generation Information Platform (NGIP). At the core of its architecture is a fabric that enables “distributed scale-out” of high-frequency, ultra low-latency applications deployed within an in-memory-centric landscape.
The cloud is the NGIP business model, but financial services organizations will avoid the typical public cloud environment, preferring their own internal clouds. Moreover, an abstraction layer will be necessary to allow applications to scale out while preserving the integrity of existing data and transactions. Public clouds typically provide an open-source set of tools and technologies that enable “brute force” massively parallel processing of Big Data, and the traditional data management vendors are being challenged by so-called NoSQL alternatives for running certain classes of applications. The NGIP seeks to harness existing investments and, most importantly, not to compromise existing applications, while building where appropriate on the foundations of new distributed computing frameworks such as Hadoop.
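As a rough illustration of the “brute force” massively parallel style that frameworks like Hadoop popularized, the sketch below splits a batch of trade records across worker processes and merges the partial results. It uses Python’s standard library rather than Hadoop itself, and the record format (symbol, volume) is hypothetical.

```python
# Minimal map-and-merge sketch of brute-force parallel processing, using local
# processes in place of a Hadoop cluster. The trade-record format is hypothetical.
from multiprocessing import Pool
from collections import Counter

def map_partition(trades):
    """Map step: count traded volume per symbol within one data partition."""
    counts = Counter()
    for symbol, volume in trades:
        counts[symbol] += volume
    return counts

def reduce_counts(partial_counts):
    """Reduce step: merge the per-partition counts into a single result."""
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Hypothetical trade partitions, as they might arrive from a distributed store.
    partitions = [
        [("SAP", 100), ("DBK", 250), ("SAP", 50)],
        [("DBK", 300), ("ALV", 120)],
        [("SAP", 75), ("ALV", 80)],
    ]
    with Pool() as pool:
        partials = pool.map(map_partition, partitions)
    print(reduce_counts(partials))  # Counter({'DBK': 550, 'SAP': 225, 'ALV': 200})
```

Hadoop applies the same map-and-merge idea across a cluster of machines and a distributed file system rather than across local processes.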
In addition, the NGIP will enable the adoption of internal cloud computing among the 60 percent of capital markets firms that have not yet leveraged its ability to elastically scale IT infrastructure up and down as business needs change. Driving adoption of the NGIP is the value it adds without upsetting existing IT functions.
NGIP benefits
As massive data acquisition and storage become increasingly affordable, a wide variety of enterprises are employing statisticians to perform sophisticated data analysis. In a research paper, Joseph M. Hellerstein (UC Berkeley) and his fellow researchers highlighted the emerging practice of Magnetic, Agile, Deep (MAD) data analysis as a radical departure from traditional enterprise data warehouses and business intelligence. A key requirement for supporting MAD analysis is the specific set of characteristics of the underlying repositories. As a software stack, the NGIP will capitalize on the latest advances in hardware – for example, higher processor core counts, larger memory footprints, and faster wired and wireless interconnections.
The NGIP incorporates technologies that manage the life cycle of Big Data. These technologies can consume vast amounts of heterogeneous data into a scalable repository that, like a magnet, attracts rather than repels data.
Once the data is loaded, users can apply native algorithmic functions, such as time-series analysis, to exploit its historical context. Traditionally, a repository stores data that is accessed from a client application, which typically has to materialize at least some of that data inside the application in order to process it. Applying an algorithmic function such as a time-series calculation to data natively in the store – without having to move large data sets – radically reduces latency overhead. This kind of in-database analytics will cut trading latency and take market-access times down to the nanosecond range.
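The contrast can be sketched concretely. In the toy example below, a three-tick moving average is computed once inside the store and once by materializing every row in the client; SQLite stands in for the repository, and the ticks table and its columns are hypothetical. Window functions require SQLite 3.25 or later.

```python
# Sketch: in-database analytics versus pulling raw rows into the application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (ts INTEGER, symbol TEXT, price REAL)")
conn.executemany(
    "INSERT INTO ticks VALUES (?, ?, ?)",
    [(1, "SAP", 41.0), (2, "SAP", 41.2), (3, "SAP", 40.9), (4, "SAP", 41.5)],
)

# In-database execution: the 3-tick moving average is computed inside the store,
# so only the small result set crosses the wire to the application.
rows = conn.execute(
    """
    SELECT ts,
           AVG(price) OVER (ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
    FROM ticks WHERE symbol = 'SAP'
    """
).fetchall()
print(rows)

# Application-side equivalent: every raw tick is first materialized in the client,
# which is exactly the data movement that in-database execution avoids.
prices = [p for (p,) in conn.execute("SELECT price FROM ticks WHERE symbol='SAP' ORDER BY ts")]
moving_avg = [sum(prices[max(0, i - 2): i + 1]) / len(prices[max(0, i - 2): i + 1])
              for i in range(len(prices))]
print(moving_avg)
```

In this toy case the difference is trivial, but against billions of ticks the round trip of raw data dominates the cost, which is the overhead described above.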
Consider a single trade, which typically is stored and decays in value over time. On any given day, MAD analysis can pull that trade back into play to provide a context from which to make forecasts or to create profiles.
Several factors – including the global economic downturn, the commoditization of low latency, and the growing sophistication of asset management firms – have led hedge funds to adopt trading strategies based on event data. According to a recent Automated Trader survey, 21 percent of U.S. trading firms now use news and event data feeds to make decisions more quickly.
Breaking news about corporate events, price movements, streaming data, the Web click stream, and the unstructured data created by the 43.5 million people in the United States who, in November alone, sent e-mail on their mobile devices almost every day – all of this is essential to creating and tracking a market today. A passive, traditional database will not record any of that activity; it will not even know it happened.
The key to driving value through the NGIP
Organizations that employ the NGIP in their systems will derive tremendous value from their data. The key is to bring the data closer to the application tier and the users while allowing computation in the cloud, so that transactions and analytics occur more quickly and efficiently than before.
The NGIP abstracts the complex interaction patterns between system components or services to sustain the drive to zero latency that the growing uptake of event-driven processing demands. With the NGIP, companies have the best way to harness the complexity that arises when traditional systems are intermixed with event-driven, publish-subscribe processing.
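For readers less familiar with the pattern, the sketch below is a minimal in-process publish-subscribe bus of the kind such an abstraction would hide behind a cleaner interface. The topic name, handlers, and payload are hypothetical, and a production fabric would be distributed and asynchronous rather than a single Python class.

```python
# Minimal in-process publish-subscribe sketch of event-driven processing.
# Topic names and the event payload are hypothetical.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to be called for every event published on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every handler subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
# A pre-trade analytics component and a risk check both react to the same price event
# without the publisher knowing that either of them exists.
bus.subscribe("price.update", lambda e: print("analytics saw", e))
bus.subscribe("price.update", lambda e: print("risk check saw", e))
bus.publish("price.update", {"symbol": "SAP", "price": 41.5})
```

The point of the abstraction is that publishers and subscribers never reference each other directly, so new event-driven components can be attached without touching the traditional systems they sit alongside.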
The NGIP’s ability to provide contextual reference data is a significant answer to traders’ attempts to wring profit out of a more competitive and complex trading environment while optimizing the drive to zero latency.
Everything matters, and getting the best from what you have is critical.
Irfan Khan is Senior Vice President and Chief Technology Officer at Sybase. He oversees the technology offices in each of Sybase’s business units, ensuring that market needs and customer aspirations are reflected in the company’s innovation and product development. He also sets the architecture and technology direction for the worldwide technical sales organization, and as part of his CTO responsibilities he seeds new innovation and drives the adoption of new technologies. Mr. Khan is also in charge of the Sybase Developer Network. In 2010, he received the InfoWorld CTO Top 25 Award and was named to the International Advisory Board of Cloud Expo.
