These findings suggest that while there is an arms race in speed, the arms race does not actually eliminate the arbitrage opportunities; rather, it just continually raises the bar for capturing them. A complementary finding, in the correlation breakdown analysis, is that the number of milliseconds necessary for economically meaningful correlations to emerge has been steadily decreasing over the time period 2005-2011; but, in all years, market correlations are essentially zero at high-enough frequency. Overall, our analysis suggests that the prize in the arms race should be thought of more as a mechanical “constant” of the continuous limit order book market design, rather than as an inefficiency that is competed away over time.

From “The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response”
The article quoted above has received a lot of attention for its economic critique of high frequency trading (HFT). The main point is that there are arbitrage opportunities between exchanges that emerge only at extremely high speeds. For example, the CME E-mini S&P 500 future (ES) and the SPDR S&P 500 ETF (SPY) both track the S&P 500 index and correlate tightly at the scale of a day, an hour, or a minute, but not at the millisecond level. Consequently, traders equipped with fast enough trading technology can take advantage of “artificial” arbitrage opportunities, say by spotting a move up on one exchange and buying on the other before the longer-time-scale correlation kicks in. I’m not completely convinced, since there are differences between the two assets – such as a significant difference in liquidity – that might be exposed at small time scales. But that’s not my concern here; my concern is the technology of the proposed remedy.
Stop that speeder!
The authors propose that exchanges replace the current “continuous” model with frequent batch auctions. The continuous model allows submission of limit orders (bids, asks, cancellations, and so on) at any time, with as-soon-as-possible execution. The batch alternative would essentially stockpile orders until a timer expires and then conduct a kind of auction, matching all crossing orders at a single clearing price determined by the accumulated bids and asks. The authors’ explanation is below:
First, frequent batch auctions give exchange computers a discrete period of time to process current orders before the next batch of orders needs to be dealt with. This simplifies the exchange’s computational task, perhaps making markets less vulnerable to incidents like the August 2013 NASDAQ outage (Bunge, Strasburg and Patterson, 2013), and also prevents order backlog and incorrect time stamps, issues that were salient during the Facebook IPO and the Flash Crash (Strasburg and Bunge, 2013; Nanex, 2011). In a sense, the continuous limit order book design implicitly assumes that exchange computers are infinitely fast; computers are fast, but not infinitely so.
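To make the mechanism concrete, here is a minimal sketch of what one clearing step might look like. This is my own toy construction, not the authors’ specification: in particular, I assume the clearing price is the midpoint of the last crossing bid/ask pair, which is just one possible convention.

```python
# Toy sketch of one uniform-price batch clearing step (my construction,
# not the paper's specification).
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price
    qty: int      # number of shares/contracts

def clear_batch(orders):
    """Match all crossing orders collected during the batch interval.
    Returns (clearing_price, traded_volume); price is None if nothing crosses."""
    buys = sorted([o for o in orders if o.side == "buy"], key=lambda o: -o.price)
    sells = sorted([o for o in orders if o.side == "sell"], key=lambda o: o.price)
    bi = si = 0
    brem = buys[0].qty if buys else 0   # remaining quantity on the current bid
    srem = sells[0].qty if sells else 0
    price, volume = None, 0
    while bi < len(buys) and si < len(sells) and buys[bi].price >= sells[si].price:
        traded = min(brem, srem)
        volume += traded
        # one possible convention: midpoint of the last crossing pair
        price = (buys[bi].price + sells[si].price) / 2
        brem -= traded
        srem -= traded
        if brem == 0:
            bi += 1
            brem = buys[bi].qty if bi < len(buys) else 0
        if srem == 0:
            si += 1
            srem = sells[si].qty if si < len(sells) else 0
    return price, volume
```

Note that every order submitted during the interval is treated identically regardless of its arrival time within it; that is the point of the design – speed within a batch interval buys nothing.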
There are three technical issues that are passed over too lightly here. The first is that batch processing replaces a soft-real-time schedule with a hard-real-time schedule, which makes the whole system more fragile. In the current system, an exchange that cannot keep up with a rush of orders simply slows down. Queues get longer – perhaps giving traders an incentive to cancel orders – and latency increases. Traders who are not properly synchronizing time are flying blind, without the ability to detect and document small changes in latency, but this technology is readily available (email [email protected] and we can fix you up). The batch system, however, commits the exchanges to hard-real-time deadlines: every single submitted order must be processed within the batch interval, and there must still be time to produce and transmit a report on the result. So the implicit claim that the batch process resolves the backlog issue, and other issues resulting from limited processing speed, seems to me to require at the very least some more explanation.

What happens if a batch deadline is missed? In the meantime, new orders are accumulating for the next batch. Should the exchange abort the late batch, combine it with the waiting one, or simply push the timer on the next batch deadline forward? Notice that if batch intervals are, say, one second, a failure to complete batch T1 in one second delays T2, which means that orders for T2 accumulate for more than a second. If the heavy order load is not limited to a single period, T2 then has even less chance of completing on time, and deadline failures begin to cascade. There may be some way around this, but the design of real-time systems must include an analysis of what happens when deadlines are missed.
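The cascade can be made concrete with a toy simulation; all the numbers below are invented for illustration and have nothing to do with any real exchange. The model: orders arrive at a constant rate, batch k starts when the previous batch finishes (or at its scheduled time, whichever is later), and its cutoff is the moment processing actually starts – so an overrun by one batch lengthens the queue for the next.

```python
# Toy simulation of cascading deadline misses (all numbers invented).
# Processing time is assumed linear in the number of queued orders.

def simulate(n_batches, order_rate, per_order_cost, interval=1.0):
    """Return a list of (batch, orders_handled, missed_deadline) tuples."""
    prev_cutoff = 0.0   # time of the previous batch's order cutoff
    finish = 0.0        # when the exchange last became idle
    out = []
    for k in range(1, n_batches + 1):
        start = max(k * interval, finish)            # delayed if the last batch overran
        queued = order_rate * (start - prev_cutoff)  # queue keeps growing while we wait
        finish = start + per_order_cost * queued
        missed = finish > (k + 1) * interval         # result is due before the next clear
        out.append((k, round(queued), missed))
        prev_cutoff = start
    return out
```

With a load the exchange can just barely not handle (1,000 orders per second at 1.1 ms each against a one-second interval), every batch is late and each queue is 10% longer than the last; nudge the per-order cost down to 0.9 ms and the system is stable. That knife edge is why a missed-deadline policy has to be part of the design, not an afterthought.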
The second technical problem is that the timestamp failures the authors point out have nothing to do with continuous versus batch processing and everything to do with poor time synchronization technology. At the risk of repeating myself, please email [email protected] if you want to solve the problem. We can help you out.
Finally, with multiple exchanges and non-synchronized batches, the fast trader’s advantage reappears and may even become more pronounced. Suppose exchanges A, B, and C all march along, second by second. Enterprising traders will look for arbitrage possibilities between exchanges as auctions complete out of sync – unless we’re going to mandate a cross-exchange timer, which has its own issues (and makes missed deadlines even more interesting).
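The arithmetic here is easy to check with a toy model (the phase offsets are invented for illustration). If each exchange clears once per second but with a different phase, then after every clearing there is a window during which the other exchanges are still accepting orders for their next batch; reacting to the published price inside that window is exactly the old speed race, just relocated to the gaps between auctions.

```python
# Toy model (offsets invented): exchanges batch once per interval but with
# different phase offsets. After any exchange publishes a clearing price,
# how long does a trader have before each other exchange's next cutoff?

def reaction_windows(offsets, interval=1.0, cycles=2):
    """Return (clear_time, cleared, other, window) for every clearing event."""
    events = sorted((off + k * interval, name)
                    for name, off in offsets.items()
                    for k in range(cycles))
    windows = []
    for t, name in events:
        for other, off in offsets.items():
            if other == name:
                continue
            # first cutoff of `other` strictly after the clear at time t
            k = int((t - off) // interval) + 1
            next_cutoff = off + k * interval
            windows.append((t, name, other, round(next_cutoff - t, 3)))
    return windows
```

With offsets of 0.0, 0.3, and 0.7 seconds, every clearing leaves the other two exchanges open for between 0.3 and 0.7 seconds, so a race to react follows each auction; synchronizing the clocks would close the windows but, as noted, creates its own problems.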
This is a complex topic and perhaps I have misunderstood. Leave a comment or tweet to @vyodaiken.