HFT Strategies Are the Same Regardless of Data Feed Variety or Speed

The Issue:

Former SEC Chairman Mary Jo White has identified the latency difference between direct data feeds and the consolidated feeds as a “fairness concern.” Some have argued that the lag between the speed at which direct feeds and the slower consolidated feeds deliver refreshed stock prices could give those with the faster feeds an unfair advantage. White suggests exchanges could “include affirmative or negative trading obligations for high-frequency trading firms that employ the fastest, most sophisticated trading tools. Such obligations would be analogous to the ones that historically applied to the proprietary traders with time and place advantages on manual trading floors.”

MMI’s Stance:

If the consolidated feed were replaced with direct feeds for all, it would not affect the strategies of our Member firms. In fact, our Members trade successfully in asset classes with only a single data feed, such as the U.S. Treasury market and the Chicago Mercantile Exchange here in the U.S., and in multiple geographies where there is also only a single data feed.

MMI Member Guest Editorial: Speed up the SIP
MMI Member Adam Nunes writes that a better way to manage access to and distribution of basic stock market trading data (the SIP) is to offer competing models by allowing: 1) firms to order SIP data as they do today, by contacting their vendor or the SIP administrator; 2) the firm or vendor connecting to the SIP to get a connection to each exchange to listen to its data where the data is produced (rather than getting the data from a central location); and 3) the firm to receive and process the data much as it handles direct market data feeds. Unleashing these competitive forces, he reasons, would be an important first step in improving investors’ access to market data.


By Adam Nunes, Head of Business Development, Hudson River Trading

For a while now, there has been considerable debate and handwringing about the relative slowness of the Securities Information Processor (“SIP”). It has been identified as a single point of failure and blamed for creating a two-speed marketplace, since SIP data moves more slowly than exchange direct-feed data. Nasdaq and NYSE have been working closely with the industry to ensure the resiliency of the SIP, but the SIP is still not as fast as direct feeds. It’s time to move the discussion forward and enact concrete proposals to fundamentally update this critical piece of market infrastructure, mandated by regulatory directives such as Reg NMS and the vendor display rule.

Last year, BlackRock wrote a letter to the Securities and Exchange Commission (SEC) suggesting “exchanges should make the necessary investments in technology to reduce the latency between the SIP and private data feeds to market acceptable standards.” Two months prior, SIFMA recommended “the central SIP structure should be eliminated and replaced with commercially competitive Market Data Aggregators.” And nearly 15 years ago, the SEC’s venerable Seligman Commission recommended “competing consolidators to evolve from the current unitary consolidator model.”

I believe these goals can be reached with a simple change in the technical structure of the SIP that would eliminate the latency difference between direct market data feeds and the SIP, remove the SIP as a single point of failure and create a platform that makes the market better for all investors.

Here is how the SIP currently works for a firm trading NYSE stocks on Nasdaq using the SIP data:
• Nasdaq receives, in its data center, an order to trade an NYSE-listed stock from a dealer located in that same data center
• That order leads to an update in the stock’s quote, which must be reported to the NYSE SIP in NYSE’s data center
• Nasdaq sends the update to the NYSE SIP in a data center several miles away
• The NYSE SIP processes the update
• The NYSE SIP sends the updated information back to the trading firm in the Nasdaq data center
• The trading firm in the Nasdaq data center can then process and use the data
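The extra hops above can be sketched with a few lines of code. All latency figures below are hypothetical, chosen only to illustrate why a round trip through a SIP in another data center must be slower than a direct feed within the same building.

```python
# Illustrative sketch of the SIP round trip described above.
# All latency numbers are assumptions, for illustration only.

INTRA_DATACENTER_US = 50   # assumed one-way hop within a single data center (microseconds)
INTER_DATACENTER_US = 200  # assumed one-way hop between data centers miles apart
SIP_PROCESSING_US = 100    # assumed time for the SIP to process one update

def direct_feed_latency():
    """Exchange publishes directly to a subscriber in the same data center."""
    return INTRA_DATACENTER_US

def sip_latency():
    """The quote update travels to the NYSE SIP in another data center,
    is processed there, and travels back to the firm in the Nasdaq data center."""
    return (INTER_DATACENTER_US     # Nasdaq -> NYSE SIP
            + SIP_PROCESSING_US     # SIP processes the update
            + INTER_DATACENTER_US)  # NYSE SIP -> firm in Nasdaq data center

print(f"direct feed: {direct_feed_latency()} us")
print(f"via SIP:     {sip_latency()} us")
```

Whatever the true numbers, the structural point holds: the SIP path adds two inter-data-center hops plus processing time that the direct-feed path never incurs.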

If this strikes you as extremely inefficient, that’s because it is. It’s like talking to someone on a cell phone when you are standing close enough to speak in person: routing both voices through a cell tower and back to your phones is far less efficient than communicating directly. Your live voice within earshot is how direct feeds are disseminated; the cell-phone lag is how the SIP is disseminated. It doesn’t have to be this way.

A better method is the “competing consolidator” model. Under this proposal: 1) firms would order SIP data as they do today, by contacting their vendor or the SIP administrator; 2) the firm or vendor connecting to the SIP would get a connection to each exchange to listen to its data where the data is produced (rather than getting the data from a central location); and 3) the firm would receive and process the data much as it handles direct market data feeds.
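The core task of any consolidator, competing or central, is simple: merge per-exchange quotes into a national best bid and offer. A minimal sketch of that step, with hypothetical exchange names and prices, might look like this:

```python
# Minimal sketch of the consolidation step a "competing consolidator" would
# perform: listen to each exchange's feed directly and compute the NBBO
# itself. Feed names and quotes below are hypothetical.

from dataclasses import dataclass

@dataclass
class Quote:
    exchange: str
    bid: float
    ask: float

def consolidate(quotes):
    """Return the national best bid and best offer across all exchanges."""
    best_bid = max(quotes, key=lambda q: q.bid)   # highest bid wins
    best_ask = min(quotes, key=lambda q: q.ask)   # lowest offer wins
    return best_bid, best_ask

quotes = [
    Quote("NasdaqFeed", bid=10.01, ask=10.04),
    Quote("NYSEFeed",   bid=10.02, ask=10.05),
    Quote("BatsFeed",   bid=10.00, ask=10.03),
]

bid, ask = consolidate(quotes)
print(f"NBBO: {bid.bid} ({bid.exchange}) x {ask.ask} ({ask.exchange})")
```

The computation itself is trivial; the proposal’s point is where it runs. Done by many firms and vendors next to the exchanges’ own matching engines, it avoids the detour through a single central processor.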

A minority of members on the Seligman committee noted the technical challenges associated with moving to this model. But given the technical advances in the 15 years since, those challenges have been solved. And surely there would be a host of vendors competing to consolidate the data far faster than the current SIP does. By moving to this more open-architecture model, we could substantially improve the performance of the SIP and introduce competition into the process. In addition, because there would be multiple consolidators using different code bases, we would remove the single point of failure that exists with the current SIP structure.

Instead of focusing on just increasing the speed of the SIP, I think it is time to consider changing its structure to be quicker, more resilient and better positioned to deliver data that is critical to the efficiency of the securities markets. In the process, it would alleviate the concentration risk of having one processor solely responsible for all Nasdaq stock quotes and a separate processor solely responsible for all NYSE and Tape B quotes, and help bolster investor confidence in the markets. Unleashing these competitive forces would be an important first step in improving investors’ access to market data.

How Slow Is the NBBO? A Comparison with Direct Exchange Feeds
Shengwei Ding of Wells Fargo Securities and Professors John Hanna and Terrence Hendershott of the University of California find that the short duration of dislocations makes their cost small for investors who trade infrequently, while the frequency of the dislocations makes them costly for frequent traders.
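The finding’s intuition can be made concrete with back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical numbers (not figures from the paper): an occasional trader rarely lands in a brief dislocation, while a high-volume trader’s exposure accumulates across thousands of trades.

```python
# Back-of-the-envelope sketch: why brief but frequent dislocations cost
# infrequent traders little and frequent traders a lot.
# All inputs are hypothetical, for illustration only.

def expected_dislocation_cost(trades_per_day, dislocated_fraction, cost_per_trade):
    """Expected daily cost = trades that happen to land in a dislocation
    times the assumed extra cost of trading at a stale price."""
    return trades_per_day * dislocated_fraction * cost_per_trade

FRACTION = 0.02  # assume quotes are dislocated 2% of the time
COST = 0.01      # assume $0.01 extra cost per share on a dislocated trade

occasional = expected_dislocation_cost(1, FRACTION, COST)       # 1 trade/day
frequent = expected_dislocation_cost(10_000, FRACTION, COST)    # 10,000 trades/day

print(f"occasional trader: ${occasional:.4f}/day")
print(f"frequent trader:   ${frequent:.2f}/day")
```

Under these assumed inputs the per-trade cost is identical; only trading frequency separates a negligible expense from a material one, which is the asymmetry the authors describe.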

Co-location and Direct Market Data Feeds Promote Efficiency and Transparency
MMI blog post that discusses how co-location (the practice of hosting a firm’s computers in the same facility as the exchange in order to reduce communication time with the exchange) and direct market data feeds are crucial components of the efficiency of today’s modern markets.