
Does Algorithmic Trading Improve Liquidity?

TERRENCE HENDERSHOTT, CHARLES M. JONES, and ALBERT J. MENKVELD

ABSTRACT

Algorithmic trading (AT) has increased sharply over the past decade. Does it improve market quality, and should it be encouraged? We provide the first analysis of this question. The New York Stock Exchange automated quote dissemination in 2003, and we use this change in market structure that increases AT as an exogenous instrument to measure the causal effect of AT on liquidity. For large stocks in particular, AT narrows spreads, reduces adverse selection, and reduces trade-related price discovery.

The findings indicate that AT improves liquidity and enhances the informativeness of quotes.

TECHNOLOGICAL CHANGE HAS REVOLUTIONIZED the way financial assets are traded. Every step of the trading process, from order entry to trading venue to back office, is now highly automated, dramatically reducing the costs incurred by intermediaries. By reducing the frictions and costs of trading, technology has the potential to enable more efficient risk sharing, facilitate hedging, improve liquidity, and make prices more efficient. This could ultimately reduce firms’ cost of capital.

Algorithmic trading (AT) is a dramatic example of this far-reaching technological change. Many market participants now employ AT, commonly defined as the use of computer algorithms to automatically make certain trading decisions, submit orders, and manage those orders after submission. From a starting point near zero in the mid-1990s, AT is thought to be responsible for as much as 73 percent of trading volume in the United States in 2009.1

There are many different algorithms, used by many different types of market participants. Some hedge funds and broker–dealers supply liquidity

Hendershott is at Haas School of Business, University of California Berkeley. Jones is at Columbia Business School. Menkveld is at VU University Amsterdam. We thank Mark van Achter, Hank Bessembinder, Bruno Biais, Alex Boulatov, Thierry Foucault, Maureen O’Hara, Sébastien Pouget, Patrik Sandas, Kumar Venkataraman, the NASDAQ Economic Advisory Board, and seminar participants at the University of Amsterdam, Babson College, Bank of Canada, CFTC, HEC Paris, IDEI Toulouse, Southern Methodist University, University of Miami, the 2007 MTS Conference, NYSE, the 2008 NYU-Courant algorithmic trading conference, University of Utah, the 2008 Western Finance Association meetings, and Yale University. We thank the NYSE for providing system order data. Hendershott gratefully acknowledges support from the National Science Foundation, the Net Institute, the Ewing Marion Kauffman Foundation, and the Lester Center for Entrepreneurship and Innovation at the Haas School at UC Berkeley. Menkveld gratefully acknowledges the College van Bestuur of VU University Amsterdam for a VU talent grant.

1See “SEC runs eye over high-speed trading,” Financial Times, July 29, 2009. The 73% is an estimate for high-frequency trading, which, as discussed later, is a subset of AT.


using algorithms, competing with designated market-makers and other liquidity suppliers (e.g., Jovanovic and Menkveld (2010)). For assets that trade on multiple venues, liquidity demanders often use smart order routers to determine where to send an order (e.g., Foucault and Menkveld (2008)). Statistical arbitrage funds use computers to quickly process large amounts of information contained in the order flow and price moves in various securities, trading at high frequency based on patterns in the data. Last but not least, algorithms are used by institutional investors to trade large quantities of stock gradually over time.

Before AT took hold, a pension fund manager who wanted to buy 30,000 shares of IBM might hire a broker-dealer to search for a counterparty to execute the entire quantity at once in a block trade. Alternatively, that institutional investor might have hired a New York Stock Exchange (NYSE) floor broker to go stand at the IBM post and quietly “work” the order, using his judgment and discretion to buy a little bit here and there over the course of the trading day to keep from driving the IBM share price up too far. As trading became more electronic, it became easier and cheaper to replicate that floor trader with a computer program doing AT (see Hendershott and Moulton (2009) for evidence on the decline in NYSE floor broker activity).

Now virtually every large broker-dealer offers a suite of algorithms to its institutional customers to help them execute orders in a single stock, in pairs of stocks, or in baskets of stocks. Algorithms typically determine the timing, price, quantity, and routing of orders, dynamically monitoring market conditions across different securities and trading venues, reducing market impact by optimally and sometimes randomly breaking large orders into smaller pieces, and closely tracking benchmarks such as the volume-weighted average price (VWAP) over the execution interval. As they pursue a desired position, these algorithms often use a mix of active and passive strategies, employing both limit orders and marketable orders. Thus, at times they function as liquidity demanders, and at times they supply liquidity.

Some observers use the term AT to refer only to the gradual accumulation or disposition of shares by institutions (e.g., Domowitz and Yegerman (2005)). In this paper, we take a broader view of AT, including in our definition all participants who use algorithms to submit and cancel orders. We note that algorithms are also used by quantitative fund managers and others to determine portfolio holdings and formulate trading strategies, but we focus on the execution aspect of algorithms because our data reflect counts of actual orders submitted and cancelled.

The rise of AT has obvious direct impacts. For example, the intense activity generated by algorithms threatens to overwhelm exchanges and market data providers,2 forcing significant upgrades to their infrastructures. But researchers, regulators, and policymakers should be keenly interested in the broader implications of this sea change in trading. Overall, does AT have

2See “Dodgy tickers-stock exchanges,” Economist, March 10, 2007.


salutary effects on market quality, and should it be encouraged? We provide the first empirical analysis of this question.

As AT has grown rapidly since the mid-1990s, liquidity in world equity markets has also dramatically improved. Based on these two coincident trends, it is tempting to conclude that AT is at least partially responsible for the increase in liquidity. But it is not at all obvious a priori that AT and liquidity should be positively related. If algorithms are cheaper and/or better at supplying liquidity, then AT may result in more competition in liquidity provision, thereby lowering the cost of immediacy. However, the effects could go the other way if algorithms are used mainly to demand liquidity. Limit order submitters grant a trading option to others, and if algorithms make liquidity demanders better able to identify and pick off an in-the-money trading option, then the cost of providing the trading option increases, in which case spreads must widen to compensate. In fact, AT could actually lead to an unproductive arms race, where liquidity suppliers and liquidity demanders both invest in better algorithms to try to take advantage of the other side, with measured liquidity the unintended victim.

In this paper, we investigate the empirical relationship between AT and liquidity. We use a normalized measure of NYSE electronic message traffic as a proxy for AT. This message traffic includes electronic order submissions, cancellations, and trade reports. Because we normalize by trading volume, variation in our AT measure is driven mainly by variation in limit order submissions and cancellations. This means that, for the most part, our measure is picking up variation in algorithmic liquidity supply. This liquidity supply likely comes both from proprietary traders that are making markets algorithmically and from buy-side institutions that are submitting limit orders as part of “slice and dice” algorithms.

We first examine the growth of AT and the improvements in liquidity over a 5-year period. As AT grows, liquidity improves. However, while AT and liquidity move in the same direction, it is certainly possible that the relationship is not causal. To establish causality we study an important exogenous event that increases the amount of AT in some stocks but not others. In particular, we use the start of autoquoting on the NYSE as an instrument for AT. Previously, specialists were responsible for manually disseminating the inside quote. This was replaced in early 2003 by a new automated quote whenever there was a change to the NYSE limit order book. This market structure provides quicker feedback to algorithms and results in more electronic message traffic. Because the change was phased in for different stocks at different times, we can take advantage of this nonsynchronicity to cleanly identify causal effects.

We find that AT does in fact improve liquidity for large-cap stocks. Quoted and effective spreads narrow under autoquote. The narrower spreads are a result of a sharp decline in adverse selection, or equivalently a decrease in the amount of price discovery associated with trades. AT increases the amount of price discovery that occurs without trading, implying that quotes become more informative. There are no significant effects for smaller-cap stocks, but our instrument is weaker there, so the problem may be a lack of statistical power.


Surprisingly, we find that AT increases realized spreads and other measures of liquidity supplier revenues. This is surprising because we initially expected that if AT improved liquidity, the mechanism would be competition between liquidity providers. However, the evidence clearly indicates that liquidity suppliers are capturing some of the surplus for themselves. The most natural explanation is that, at least during the introduction of autoquote, algorithms had market power. Over a longer time period liquidity supplier revenues decline, suggesting that any market power was temporary, perhaps because new algorithms require considerable investment and time to build.

The paper proceeds as follows. Section I discusses related literature. Section II describes our measures of liquidity and AT and discusses the need for an instrumental variables approach. Section III provides a summary of the NYSE’s staggered introduction of autoquote in 2003. Section IV examines the impact of AT on liquidity. Section V explores the sources of the liquidity improvement. Section VI studies AT’s relation to price discovery via trading and quote updating. Section VII discusses and interprets the results, and Section VIII concludes.

I. Related Literature

Only a few papers address AT directly. For example, Engle et al. (2007) use execution data from Morgan Stanley algorithms to study the effects on trading costs of changing algorithm aggressiveness. Domowitz and Yegerman (2005) study execution costs for a set of buy-side institutions, comparing results from different algorithm providers. Chaboud et al. (2009) study AT in the foreign exchange market and focus on its relation to volatility, while Hendershott and Riordan (2009) measure the contributions of AT to price discovery on the Deutsche Boerse.

Several strands of literature touch on related topics. Most models take the traditional view that one set of traders provides liquidity via quotes or limit orders and another set of traders initiates a trade to take that liquidity—for either informational or liquidity/hedging reasons. Many assume that liquidity suppliers are perfectly competitive, for example, Glosten (1994). Glosten (1989) models a monopolistic liquidity supplier, while Biais et al. (2000) model competing liquidity suppliers and find that their rents decline as the number increases.

Our initial expectation is that AT facilitates the entry of additional liquidity suppliers, increasing competition.

The development and adoption of AT also involves strategic considerations.

While algorithms have low marginal costs, there may be substantial development costs, and it may be costly to optimize the algorithms’ parameters for each security. The need to recover these costs should lead to the adoption of AT at times and in securities where the returns to adoption are highest (see Reinganum (1989) for a review of innovation and technology adoption).

As we discuss briefly in the introduction, liquidity supply involves posting firm commitments to trade. These standing orders provide free trading options to other traders. Using standard option pricing techniques, Copeland


and Galai (1983) value the cost of the option granted by liquidity suppliers.

Foucault et al. (2003) study the equilibrium level of effort that liquidity suppliers should expend in monitoring the market to reduce this option’s cost. Black (1995) proposes a new limit order type that is indexed to the overall market to minimize picking-off risk. Algorithms can efficiently implement this kind of monitoring and adjustment of limit orders.3 If AT reduces the cost of the free trading option implicit in limit orders, then measures of adverse selection depend on AT. If some users of AT are better at avoiding being picked off, they can impose adverse selection costs on other liquidity suppliers as in Rock (1990) and even drive out other liquidity suppliers.

AT may also be used by traders who are trying to passively accumulate or liquidate a large position.4 There are optimal dynamic execution strategies for such traders. For example, Bertsimas and Lo (1998) find that, in the presence of temporary price impacts and a trade completion deadline, orders are optimally broken into pieces so as to minimize cost.5 Many brokers incorporate such considerations into the AT products that they sell to their clients. In addition, algorithms monitor the state of the limit order book to dynamically adjust their trading strategies, for example, when to take and offer liquidity (Foucault et al. (2010)).
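To make the flavor of this optimization concrete, consider the simplest special case of the Bertsimas and Lo (1998) problem (a stylized reduction, not the paper's general setting): a risk-neutral trader must buy $X$ shares within $N$ periods, and each slice $x_i$ incurs only a temporary linear price impact, so total cost is proportional to $\sum_i x_i^2$. Minimizing subject to the completion constraint gives equal slices:

$$\min_{x_1,\dots,x_N}\; \theta \sum_{i=1}^{N} x_i^2 \quad \text{s.t.}\quad \sum_{i=1}^{N} x_i = X \;\Longrightarrow\; x_i^* = \frac{X}{N},$$

reducing total impact cost from $\theta X^2$ for a single block to $\theta X^2/N$. Richer versions (risk aversion, information in prices, gradually replenishing liquidity) tilt the schedule away from equal slices.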

II. Data

We start by characterizing the time-series evolution of AT and liquidity for a sample of NYSE stocks over the 5 years from February 2001 through December 2005. We limit attention to the post-decimalization regime because the change to a one-penny minimum tick was a structural break that substantially altered the entire trading landscape, including liquidity metrics and order submission strategies. We end in 2005 because substantial NYSE market structure changes occur shortly thereafter.

We start with a sample of all NYSE common stocks that can be matched in both the Trades and Quotes (TAQ) and Center for Research in Security Prices (CRSP) databases. To maintain a balanced panel, we retain those stocks that are present throughout the whole sample period. Stocks with an average share

3Rosu (2009) develops a model that implicitly recognizes these technological advances and simply assumes that limit orders can be constantly adjusted. Consistent with AT, Hasbrouck and Saar (2009) find that by 2004 a large number of limit orders are cancelled within two seconds on the INET trading platform.

4Keim and Madhavan (1995) and Chan and Lakonishok (1995) study institutional orders that are broken up.

5Almgren and Chriss (2000) extend this optimization problem by considering the risk that arises from breaking up orders and slowly executing them. Obizhaeva and Wang (2005) optimize assuming that liquidity does not replenish immediately after it is taken but only gradually over time. For each component of a larger transaction, a trader or algorithm must choose the type and aggressiveness of the order. Cohen et al. (1981) and Harris (1998) focus on the simplest static choice: market order versus limit order. However, a limit price must be chosen, and the problem is dynamic; Goettler et al. (2009) model both aspects.


price of less than $5 are removed from the sample, as are stocks with an average share price of more than $1,000. The resulting sample consists of monthly observations for 943 common stocks. The balanced panel eliminates compositional changes in the sample over time, which could induce some survivorship effects if disappearing stocks are less liquid. This could overstate time-series improvements in liquidity, although the same liquidity patterns are present without a survivorship requirement (see Comerton-Forde et al. (2010)).

Stocks are sorted into quintiles based on market capitalization. Quintile 1 refers to large-cap stocks and quintile 5 corresponds to small-cap stocks. All variables used in the analysis are 99.9% winsorized: values smaller than the 0.05% quantile are set equal to that quantile, and values larger than the 99.95% quantile are set equal to that quantile.
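As an illustration, the winsorization step can be written as a small helper. This is a minimal sketch assuming a long-format pandas DataFrame; the file and column names are hypothetical, not the paper's dataset.

```python
import pandas as pd

def winsorize_999(s: pd.Series) -> pd.Series:
    """99.9% winsorization: clip at the 0.05% and 99.95% quantiles."""
    lo, hi = s.quantile(0.0005), s.quantile(0.9995)
    return s.clip(lower=lo, upper=hi)

# One row per stock-month; names are illustrative.
panel = pd.read_csv("nyse_monthly_panel.csv")
for col in ["qspread", "espread", "algo_trad"]:
    panel[col] = winsorize_999(panel[col])
```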

A. Proxies for AT

We cannot directly observe whether a particular order is generated by a computer algorithm. For cost and speed reasons, most algorithms do not rely on human intermediaries but instead generate orders that are sent electronically to a trading venue. Thus, we use the rate of electronic message traffic as a proxy for the amount of AT taking place.6 This proxy is commonly used by market participants, including consultants Aite Group and Tabb Group, as well as exchanges and other market venues.7

For example, in discussing market venue capacity limits following an episode of heavy trading volume in February 2007, a Securities Industry News report quotes NASDAQ senior vice president of transaction services, Brian Hyndman, who noted that exchanges have dealt with massive increases in message traffic over the past 5 to 6 years, coinciding with algorithmic growth:

“It used to be one-to-one,” Hyndman said. “Then you’d see a customer send ten orders that would result in only one execution. That’s because the black box would cancel a lot of the orders. We’ve seen that rise from 20- to 30- to 50-to-one. The amount of orders in the marketplace increased exponentially.”8

In the case of the NYSE, electronic message traffic includes order submissions, cancellations, and trade reports that are handled by the NYSE’s SuperDOT system and captured in the NYSE’s System Order Data (SOD) database. The electronic message traffic measure for the NYSE excludes all specialist quoting, as well as all orders that are sent manually to the floor and are handled by a floor broker.

6See Biais and Weill (2009) for theoretical evidence on how AT relates to message traffic.

7See, e.g., Jonathan Keehner, “Massive surge in quotes, electronic messages may paralyse US market,” http://www.livemint.com/2007/06/14005055/Massive-surge-in-quotes-elect.html, June 14, 2007.

8See Shane Kite, “Reacting to market break, NYSE and NASDAQ act on capacity,” Securities Industry News, March 12, 2007.


As suggested by the quote above, an important issue is whether and how to normalize the message traffic numbers. The top half of Figure 1 shows the evolution of message traffic over time. We focus on the largest-cap quintile of stocks, as they constitute the vast bulk of stock market capitalization and trading activity. Immediately after decimalization at the start of 2001, the average large-cap stock sees about 35 messages per minute during the trading day. There are a few bumps along the way, but by the end of 2005 there are an average of about 250 messages per minute (more than 4 messages per second) for these same large-cap stocks. We could, of course, simply use the raw message traffic numbers, but there has been an increase in trading volume over the same interval, and without normalization a raw message traffic measure may just capture the increase in trading rather than the change in the nature of trading.

Therefore, for each stock each month we calculate our AT proxy, $\mathit{algo\_trad}_{it}$, as the negative of trading volume (in hundreds of dollars) per electronic message.9 The normalized measure still rises rapidly over the 5-year sample period, while measures of market liquidity such as proportional spreads have declined sharply but appear to asymptote near the end of the sample period (see, e.g., the average quoted spreads in the top half of Figure 2 below), which occurs as more and more stocks are quoted with the minimum spread of $0.01.

The time-series evolution of $\mathit{algo\_trad}_{it}$ is displayed in the bottom half of Figure 1. For the largest-cap quintile, there is about $7,000 of trading volume per electronic message at the beginning of the sample in 2001, decreasing dramatically to about $1,100 of trading volume per electronic message by the end of 2005. Over time, smaller-cap stocks display similar time-series patterns.
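A minimal sketch of the proxy's construction follows; the numbers are illustrative (in practice message counts come from NYSE SOD and dollar volume from TAQ). Because the proxy is the negative of dollar volume in hundreds per message, it rises toward zero as message traffic grows relative to volume.

```python
import pandas as pd

# Illustrative stock-month observations (hypothetical values).
obs = pd.DataFrame({
    "symbol":        ["IBM", "IBM", "GE"],
    "month":         ["2001-02", "2005-12", "2001-02"],
    "messages":      [150_000, 900_000, 90_000],
    "dollar_volume": [1.05e9, 9.9e8, 4.0e8],
})

# algo_trad = -(dollar volume in $100s) per electronic message (see Figure 1).
obs["algo_trad"] = -(obs["dollar_volume"] / 100.0) / obs["messages"]
# IBM: -70.0 in 2001-02 ($7,000 of volume per message), -11.0 in 2005-12 ($1,100).
print(obs[["symbol", "month", "algo_trad"]])
```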

It is worth noting that our AT proxies may also capture changes in trading strategies. For example, messages and $\mathit{algo\_trad}_{it}$ will increase if the same market participants use algorithms but modify their trading or execution strategies so that those algorithms submit and cancel orders more often. Similarly, the measure will increase if existing algorithms are modified to “slice and dice”

large orders into smaller pieces. This is useful, as we want to capture increases in the intensity of order submissions and cancellations by existing algorithmic traders, as well as the increase in the fraction of market participants employing algorithms in trading.

B. Liquidity Measures

We measure liquidity using quoted half-spreads, effective half-spreads, 5-minute and 30-minute realized spreads, and 5-minute and 30-minute price impacts, all of which are measured as share-weighted averages and expressed in basis points as a proportion of the prevailing midpoint. The effective spread is the difference between the midpoint of the bid and ask quotes and the actual

9Our results are virtually the same when we normalize by the number of trades or use raw message traffic numbers (see Table IA.4 in the Internet Appendix, available online in the “Supplements and Datasets” section at http://www.afajof.org/supplements.asp). The results are also the same when we use the number of cancellations rather than the number of messages to construct the AT measure.


Figure 1. Algorithmic trading measures. For each market-cap quintile, where Q1 is the largest-cap quintile, these graphs depict (i) the number of (electronic) messages per minute and (ii) our proxy for algorithmic trading, which is defined as the negative of trading volume (in hundreds of dollars) divided by the number of messages.


Figure 2. Liquidity measures. These graphs depict (i) quoted half-spread, (ii) quoted depth, and (iii) effective spread. All spread measures are share volume-weighted averages within-firm that are then averaged across firms within each market-cap quintile, where Q1 is the largest-cap quintile.


transaction price. The wider the effective spread, the less liquid is the stock. For the NYSE, effective spreads are more meaningful than quoted spreads because specialists and floor brokers are sometimes willing to trade at prices within the quoted bid and ask prices. For the $t$th trade in stock $j$, the proportional effective half-spread, $\mathit{espread}_{jt}$, is defined as

$$\mathit{espread}_{jt} = q_{jt}\,(p_{jt} - m_{jt})/m_{jt}, \qquad (1)$$

where $q_{jt}$ is an indicator variable that equals +1 for buyer-initiated trades and −1 for seller-initiated trades, $p_{jt}$ is the trade price, and $m_{jt}$ is the quote midpoint prevailing at the time of the trade. We follow the standard trade-signing approach of Lee and Ready (1991) and use contemporaneous quotes to sign trades and calculate effective spreads (see Bessembinder (2003), for example). For each stock each day, we use all NYSE trades and quotes to calculate quoted and effective spreads for each reported transaction and calculate a share-weighted average across all trades that day. For each month we calculate the simple average across days. We also measure share-weighted quoted depth at the time of each transaction in thousands of dollars.
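The calculation is easy to sketch in code. The following is a simplified rendering of equation (1) with Lee–Ready-style trade signing, assuming a trades table already merged with the prevailing quote; column names are hypothetical, and the tick test is a bare-bones variant of the full procedure.

```python
import numpy as np
import pandas as pd

def effective_half_spread_bps(trades: pd.DataFrame) -> float:
    """Share-weighted proportional effective half-spread in basis points.

    Expects columns: price, shares, bid, ask (quote prevailing at each trade).
    Signs trades with the quote rule, falling back to a tick test for trades
    at the midpoint, in the spirit of Lee and Ready (1991).
    """
    mid = (trades["bid"] + trades["ask"]) / 2.0
    q = np.sign(trades["price"] - mid)                  # +1 buyer-, -1 seller-initiated
    tick = np.sign(trades["price"].diff()).replace(0.0, np.nan).ffill()
    q = q.replace(0.0, np.nan).fillna(tick)             # midpoint trades: tick test
    espread = q * (trades["price"] - mid) / mid         # equation (1), per trade
    weights = trades["shares"] / trades["shares"].sum()
    return 1e4 * float((weights * espread).sum())       # share-weighted, in bps
```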

Figure 2 shows quite clearly that our measures of liquidity are generally improving over the sample period. Figure 1 shows that AT increases almost monotonically. The spread measures are not nearly as monotonic, with illiquidity spikes in both 2001 and 2002 that correspond to sharp stock market declines and increased volatility over the same sample period (see Figure IA.5 in the Internet Appendix). Nevertheless, one is tempted to conclude that these


two trends are related. The analysis below investigates exactly this relationship using formal econometric tools.

If spreads narrow when AT increases, it is natural to decompose the spread along the lines of Glosten (1987) to determine whether the narrower spread means less revenue for liquidity providers, smaller gross losses due to informed liquidity demanders, or both. We estimate revenue to liquidity providers using the 5-minute realized spread, which assumes the liquidity provider is able to close her position at the quote midpoint 5 minutes after the trade. The proportional realized spread for the $t$th transaction in stock $j$ is defined as

$$\mathit{rspread}_{jt} = q_{jt}\,(p_{jt} - m_{j,t+5\mathrm{min}})/m_{jt}, \qquad (2)$$

where $p_{jt}$ is the trade price, $q_{jt}$ is the buy–sell indicator (+1 for buys, −1 for sells), $m_{jt}$ is the midpoint prevailing at the time of the $t$th trade, and $m_{j,t+5\mathrm{min}}$ is the quote midpoint 5 minutes after the $t$th trade. The 30-minute realized spread is calculated analogously using the quote midpoint 30 minutes after the trade.

We measure gross losses to liquidity demanders due to adverse selection using the 5-minute price impact of a trade, $\mathit{adv\_selection}_{jt}$, defined using the same variables as

$$\mathit{adv\_selection}_{jt} = q_{jt}\,(m_{j,t+5\mathrm{min}} - m_{jt})/m_{jt}. \qquad (3)$$

The 30-minute price impact is calculated analogously. Note that there is an arithmetic identity relating the realized spread, the adverse selection (price impact), and the effective spread:

$$\mathit{espread}_{jt} = \mathit{rspread}_{jt} + \mathit{adv\_selection}_{jt}. \qquad (4)$$

Figure 3 graphs the decomposition of the effective spread into these two components. Both realized spreads, $\mathit{rspread}_{it}$, and price impacts, $\mathit{adv\_selection}_{it}$, decline from 2001 to 2005. Most of the narrowed spread is due to a decline in adverse selection losses to liquidity demanders. Depending on the size quintile considered, 75% to 90% of the narrowed spread is due to a smaller price impact.
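A sketch of the decomposition in equations (2)–(4), assuming signed trades and a time-indexed series of quote midpoints; all names are hypothetical.

```python
import pandas as pd

def decompose_spread(trades: pd.DataFrame, mids: pd.Series) -> pd.DataFrame:
    """Split each signed trade's effective half-spread into realized spread
    (eq. (2)) and 5-minute price impact (eq. (3)); they sum to eq. (1) by
    identity (4).

    `trades` needs columns: time, price, q (+1/-1 sign), mid (prevailing
    midpoint); `mids` is a DatetimeIndex-ed series of quote midpoints.
    """
    # Most recent quote midpoint 5 minutes after each trade.
    t5 = pd.DatetimeIndex(trades["time"]) + pd.Timedelta(minutes=5)
    mid_5m = mids.reindex(mids.index.union(t5)).ffill().loc[t5].to_numpy()

    out = trades.copy()
    out["espread"] = out["q"] * (out["price"] - out["mid"]) / out["mid"]
    out["rspread"] = out["q"] * (out["price"] - mid_5m) / out["mid"]
    out["adv_selection"] = out["q"] * (mid_5m - out["mid"]) / out["mid"]
    # Identity (4) holds up to floating-point rounding.
    assert ((out["espread"] - out["rspread"] - out["adv_selection"]).abs() < 1e-12).all()
    return out
```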

So far, the graphical evidence shows time-series associations between AT and liquidity. The natural way to formally test this association is by regressing the various liquidity measures ($L_{it}$) on AT ($A_{it}$) and variables controlling for market conditions ($X_{it}$):

$$L_{it} = \alpha_i + \beta A_{it} + \delta X_{it} + \varepsilon_{it}. \qquad (5)$$

The problem is that AT is an endogenous choice made by traders. A trader’s decision to adopt AT could depend on many factors, including liquidity. For example, the evidence in Goldstein and Kavajecz (2004) indicates that humans are used more often when markets are illiquid and volatile. Econometrically, this means that the slope coefficient $\beta$ from estimating equation (5) via OLS is not an unbiased estimate of the causal effect of a change in AT on liquidity.

Unless we have a structural model, the only way to identify the causal effect is to find an instrumental variable (IV) that affects AT but is uncorrelated with $\varepsilon_{it}$.


Figure 3. Spread decomposition into liquidity supplier revenues and adverse selection.

These graphs depict the two components of the effective spread: (i) realized spread and (ii) the adverse selection component, also known as the (permanent) price impact. Both are based on the quote midpoint 5 minutes after the trade. Results are graphed by market-cap quintile, where Q1 is the largest-cap quintile.


Standard econometrics texts, for example, Greene (2007, Ch. 12), show that under these conditions the resulting IV estimator consistently estimates the causal effect, in this case the effect of an exogenous change in AT on liquidity.

We discuss such an instrument in the next section.

III. Autoquote

In this section we provide an overview of our instrument, which is a change in NYSE market structure that causes an exogenous increase in AT.

As a result of the reduction of the minimum tick to a penny in early 2001 as part of decimalization, the depth at the inside quote shrank dramatically.

In response, the NYSE proposed that a “liquidity quote” for each stock be displayed along with the best bid and offer. The NYSE liquidity quote was designed to provide a firm bid and offer for substantial size, typically at least 15,000 shares, accessible immediately.10

At the time of the liquidity quote proposal, specialists were responsible for manually disseminating the inside quote.11 Clerks at the specialist posts on the floor of the exchange were typing rapidly and continuously from open to close and still were barely keeping up with order matching, trade reporting, and quote updating. In order to ease this capacity constraint and free up specialists and clerks to manage a liquidity quote, the exchange proposed that it

“autoquote” the inside quote, disseminating a new quote automatically whenever there was a relevant change to the limit order book. This would happen when a better-priced order arrived, when an order at the inside was canceled, when the inside quote was traded with in whole or in part, or when the size of the inside quote changed.

Note that the specialist’s structural advantages were otherwise unaffected by autoquote. A specialist could still disseminate a manual quote at any time in order to reflect his own trading interest or that of floor traders. Specialists continued to execute most trades manually, and they could still participate in those trades subject to the unchanged NYSE rules. NYSE market share remains unchanged at about 80% around the adoption of autoquote.

Autoquote was an important innovation for algorithmic traders because an automated quote update could provide more immediate feedback about the potential terms of trade. This speedup of a few seconds would provide critical new information to algorithms, but would be unlikely to directly affect the trading behavior of slower-reacting humans. Autoquote allowed algorithmic liquidity suppliers to, say, quickly notice an abnormally wide inside quote and provide liquidity accordingly via a limit order. Algorithmic liquidity demanders could quickly access this quote via a conventional market or marketable limit order or by using the NYSE’s automated execution facility for limit orders of 1,099 shares or less.

10For more details, the NYSE proposal is contained in Securities Exchange Act Release No. 47091 (December 23, 2002), 68 FR 133.

11One exception: NYSE software would automatically disseminate an updated quote after 30 seconds if the specialist had not already done so.


Figure 4. Autoquote introduction. This graph depicts the staggered introduction of autoquote on the NYSE. It graphs the number of stocks in each market-cap quintile that are autoquoted at a given time. Quintile 1 contains largest-cap stocks.

In the next section, we show that autoquote is positively correlated with our AT measure, which is one of the requirements for autoquote to be a valid instrument.

The NYSE began to phase in the autoquote software on January 29, 2003, starting with six active, large-cap stocks. During the next 2 months, over 200 additional stocks were phased in at various dates, and all remaining NYSE stocks were phased in on May 27, 2003.12 Figure 4 provides additional details on the phase-in process. The rollout order was determined in late 2002. Early stocks tended to be active large-cap stocks because the NYSE felt that these stocks would benefit most from the liquidity quote. Beyond that criterion, conversations with those involved at the NYSE indicate that early phase-in stocks were chosen mainly because the specialist assigned to that stock was receptive to new technology.

The phase-in is particularly important to our empirical design. It allows us to take out all market-wide changes in liquidity, and identify the causal effect of AT by comparing autoquoted stocks to non-autoquoted stocks using a difference-in-differences methodology. The IV methodology discussed below incorporates data before and after each NYSE stock’s autoquote adoption so the estimated effect of AT on liquidity incorporates every stock’s autoquote

12Liquidity quotes were delayed due to a property rights dispute with data vendors, so they did not become operational until June 2003, after autoquote was fully phased in. Liquidity quotes were almost never used and were formally abandoned in July 2005.


transition, whenever it occurs. Thus, even if the phase-in order is determined by other unknown criteria, our empirical methodology remains valid in most cases.

For example, there is no bias if the phase-in is determined by the specialist’s receptiveness to new technology, and this is correlated with the amount of AT in his stocks. There are a small number of problematic phase-in scenarios, however. We discuss these next.

For the staggered introduction of autoquote to serve as a valid instrument, it must satisfy the exclusion restriction. Specifically, a stock’s move to autoquote must not be correlated with the error term in that firm’s liquidity equation (equation (5)). This does not mean that the autoquote rollout must be assigned randomly. The liquidity equation includes a firm fixed effect, calendar dummies, and a set of control variables. The instrument remains valid even if the rollout schedule is related to these particular explanatory variables. For instance, if the stocks chosen for early phase-in tend to have high mean liquidity, this would be picked up by the firm fixed effect and the exclusion restriction would still hold.

In fact, due to the explanatory variables, the exclusion restriction is violated only if the autoquote phase-in schedule is somehow related to contemporaneous changes in firm-specific, idiosyncratic liquidity that are not due to changes in AT.

Thus, it is quite helpful that the rollout schedule for autoquote was fixed months in advance, as it seems highly unlikely that the phase-in schedule could be correlated with idiosyncratic liquidity months into the future. The only way this might happen is if there are sufficiently persistent but temporary shocks to idiosyncratic liquidity. For example, if temporarily illiquid stocks are chosen for early phase-in, these stocks might still be illiquid when autoquote begins, and their liquidity would improve post-autoquote as they revert to mean liquidity, thereby overstating the causal effect.13 To investigate this possibility, we study the dynamics of liquidity using an AR(1) model of effective spreads for each firm in the sample. Table IA.3 of the Internet Appendix shows that the average AR(1) coefficient is 0.18, corresponding to a half-life of less than a day.14 We also do not find statistical support for the conjecture that stocks that migrate experience unusual liquidity just ahead of the migration. More precisely, the predicted effective spread based on all information up until the day before the introduction, including liquidity covariates, is not significantly different from its unconditional mean. All of this supports the exogeneity of our instrument.
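The half-life follows directly from the AR(1) coefficient: a shock to effective spreads decays by a factor of 0.18 per day, so the number of days $h$ needed for half the shock to dissipate solves $0.18^h = 0.5$:

$$h = \frac{\ln 0.5}{\ln 0.18} \approx 0.40 \text{ trading days},$$

well under one day, which is why persistent idiosyncratic liquidity shocks are unlikely to confound a rollout schedule fixed months in advance.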

Lastly, the exclusion restriction requires autoquote to affect liquidity only via AT. We have argued that autoquote’s time scale is only relevant for algorithms and that autoquote does not directly affect liquidity via nonalgorithmic trading.15 However, we cannot test this conjecture using the available data.

13Late phase-in stocks will not offset this effect. Even if late phase-in stocks are temporarily liquid when chosen, this temporary effect has more time to die out by the time autoquote is implemented for them.

14Also because the average daily AR(1) coefficient is quite small, there is little scope for the bias that can arise in dynamic panel data models with strong persistence. See, for example, Arellano (2003, Ch. 6.2).

15For example, autoquote could simply make the observed quotes less stale. We investigate this possibility in Section I of the Internet Appendix and find that this mechanical explanation is unlikely to account for our results.


Thus, it is important to emphasize that our conclusions on causality rely on the intuitively appealing but ultimately untestable assumption that autoquote affects liquidity only via its effect on AT.

IV. AT’s Impact on Liquidity

To study the effects of autoquote, we build a daily panel of NYSE common stocks. The sample begins on December 2, 2002, which is approximately 2 months before the autoquote phase-in begins, and it extends through July 31, 2003, about 2 months after the last batch of NYSE stocks moves to the autoquote regime. We use standard price filters: stocks with an average share price of less than $5 or more than $1,000 are removed. To make our various autoquote analyses comparable, we use the same sample of stocks throughout this section. The Hasbrouck (1991a,b) decomposition (discussed below in Section VI) has the most severe data requirements, so we retain all stocks that have at least 21 trades per day for each day in the 8-month sample period. This leaves 1,082 stocks in the sample. The shorter time period for the autoquote sample allows for a larger balanced panel compared to the 5-year balanced panel used to create Figures 1–3.

Next, we sort stocks into quintiles based on market capitalization. Quintile 1 refers to large-cap stocks and quintile 5 corresponds to small-cap stocks. Table I contains means by quintile and standard deviations for all of the variables used in the analysis. All variables used in the analysis are 99.9% winsorized: values smaller than the 0.05% quantile are set equal to that quantile, and values larger than the 99.95% quantile are set equal to that quantile.

Autoquote clearly leads to greater use of algorithms. Figure 1 shows that message traffic increases by about 50% in the most active quintile of stocks as autoquote is phased in. It is certainly hard to imagine that autoquote would change the behavior of humans by anything close to this magnitude. However, nowhere in the paper do we rely on this time-series increase in AT. Instead, we include stock fixed effects and time fixed effects (day dummies), so that we identify the effect of the market structure change via its staggered introduction.

The presence of these two-way fixed effects means that we are comparing the changes experienced by autoquoted stocks to the changes in not-yet-autoquoted control stocks.

We begin by estimating the following first-stage regression:

$$M_{it} = \alpha_i + \gamma_t + \beta Q_{it} + \varepsilon_{it}, \qquad (6)$$

where $M_{it}$ is the relevant dependent variable, for example, the number of electronic messages per minute, $Q_{it}$ is the autoquote dummy set to zero before the


Table I
Summary Statistics

This table presents summary statistics on daily data for the period December 2002 through July 2003. This period covers the phase-in of autoquote, used as an instrument in the instrumental variable analysis. The dataset combines TAQ, CRSP, and the NYSE System Order Data (SOD) database. The balanced panel consists of 1,082 stocks sorted into quintiles based on market capitalization, where quintile 1 contains largest-cap stocks. All variables are 99.9% winsorized. The within standard deviation is based on day $t$'s deviation relative to the time mean, that is, $x^*_{it} = x_{it} - \bar{x}_i$.

| Variable | Description (Units) | Source | Mean Q1 | Mean Q2 | Mean Q3 | Mean Q4 | Mean Q5 | St. Dev. Within |
|---|---|---|---|---|---|---|---|---|
| qspread_it | share volume-weighted quoted half-spread (bps) | TAQ | 5.19 | 6.82 | 9.17 | 11.68 | 19.89 | 4.84 |
| qdepth_it | share volume-weighted depth ($1,000) | TAQ | 71.22 | 41.85 | 31.43 | 24.12 | 15.76 | 23.42 |
| espread_it | share volume-weighted effective half-spread (bps) | TAQ | 3.63 | 4.79 | 6.56 | 8.46 | 14.50 | 3.73 |
| rspread_it | share volume-weighted realized half-spread, 5 min (bps) | TAQ | 1.21 | 1.44 | 1.88 | 1.97 | 4.34 | 4.71 |
| adv_selection_it | share volume-weighted adverse selection component half-spread, 5 min, “effective − realized” (bps) | TAQ | 2.42 | 3.35 | 4.69 | 6.50 | 10.16 | 5.12 |
| messages_it | # electronic messages per minute, a proxy for algorithmic activity (/minute) | SOD | 119.30 | 53.90 | 29.81 | 19.33 | 10.44 | 15.55 |
| algo_trad_it | dollar volume per electronic message times (−1), a proxy for algorithmic trading ($100) | TAQ/SOD | −18.44 | −10.99 | −8.05 | −6.39 | −4.61 | 4.54 |
| dollar_volume_it | average daily volume ($ million) | TAQ | 94.71 | 24.09 | 10.12 | 5.32 | 2.17 | 22.72 |
| trades_it | # trades per minute (/minute) | TAQ | 5.72 | 2.92 | 1.78 | 1.24 | 0.72 | 0.72 |
| share_turnover_it | (annualized) share turnover | TAQ/CRSP | 1.11 | 1.52 | 1.48 | 1.45 | 1.30 | 1.16 |
| volatility_it | standard deviation of open-to-close returns based on daily price range, that is, high minus low, Parkinson (1980) (%) | CRSP | 1.47 | 1.56 | 1.63 | 1.74 | 2.06 | 0.85 |
| price_it | daily closing price ($) | CRSP | 40.01 | 32.05 | 25.86 | 23.93 | 16.41 | 3.46 |
| market_cap_it | shares outstanding times price ($ billion) | CRSP | 28.99 | 4.09 | 1.71 | 0.90 | 0.41 | 1.96 |
| trade_size_it | trade size ($1,000) | TAQ | 37.56 | 19.41 | 13.06 | 9.73 | 6.61 | 8.03 |
| specialist_particip_it | specialist participation rate (%) | SOD | 13.07 | 12.97 | 13.08 | 13.73 | 15.84 | 3.92 |

# observations: 1,082 × 167 (stock × day)


Table II

Autoquote Impact on Messages, Algorithmic Trading Proxy, and Covariates

This table shows the impact of autoquote on other variables, and the second column can be interpreted as the first-stage instrumental variables regression when algo_trad_it is the dependent variable. The analysis is based on daily observations from December 2002 through July 2003, which covers the phase-in of autoquote. We regress each of the variables used in the IV analysis on the autoquote dummy (auto_quote_it) using the following specification:

$$M_{it} = \alpha_i + \gamma_t + \beta Q_{it} + \varepsilon_{it},$$

where $M_{it}$ is the relevant dependent variable, for example, the number of electronic messages per minute, $Q_{it}$ is the autoquote dummy set to zero before the autoquote introduction and one afterward, $\alpha_i$ is a stock fixed effect, and $\gamma_t$ is a day dummy. There are also separate regressions for each size quintile, and statistical significance is based on standard errors that are robust to general cross-section and time-series heteroskedasticity and within-group autocorrelation (see Arellano and Bond (1991)). Table I provides other variable definitions. */** denote significance at the 95%/99% level.

Slope coefficient from regression of column variable on auto_quote_it:

| | messages_it | algo_trad_it | share_turnover_it | volatility_it | 1/price_it | ln market_cap_it |
|---|---|---|---|---|---|---|
| All | 2.135** | 0.291** | −0.016** | 0.001 | 0.000** | −0.003** |
| Q1 (largest cap) | 6.286** | 0.414** | 0.016 | −0.003 | 0.000** | −0.005** |
| Q2 | 0.880** | 0.396** | −0.029 | 0.007 | −0.000** | 0.003** |
| Q3 | 0.944** | 0.292** | 0.002 | −0.001 | 0.000 | −0.004** |
| Q4 | 0.223** | 0.029 | −0.006 | −0.003 | −0.000 | 0.002 |
| Q5 (smallest cap) | −0.031 | 0.219** | −0.080** | 0.003 | 0.002** | −0.013** |

autoquote introduction and one afterward, $\alpha_i$ is a stock fixed effect, and $\gamma_t$ is a day dummy. There are also separate regressions for each size quintile.
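A sketch of this first-stage specification using the linearmodels package, one convenient implementation of two-way fixed effects; the panel file and column names are hypothetical.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Stock-day panel with a (stock, date) MultiIndex; names are illustrative.
panel = pd.read_parquet("autoquote_panel.parquet").set_index(["stock", "date"])

# Equation (6): M_it = alpha_i + gamma_t + beta * Q_it + eps_it
first_stage = PanelOLS(
    dependent=panel["algo_trad"],      # or messages per minute, etc.
    exog=panel[["auto_quote"]],        # 0 before autoquote, 1 after
    entity_effects=True,               # stock fixed effects (alpha_i)
    time_effects=True,                 # day dummies (gamma_t)
).fit(cov_type="clustered", cluster_entity=True)  # robust, within-stock clustering
print(first_stage.params["auto_quote"])  # cf. the 0.291 estimate in Table II
```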

Table II reports the slope coefficients for this specification. When the dependent variable $M_{it}$ is the number of electronic messages per minute for stock $i$ on day $t$, we find a significant positive relationship. The coefficient of 2.135 on autoquote implies that autoquote increases message traffic by an average of two messages per minute. In December 2002, the month before autoquote begins its rollout, our sample stocks average 36 messages per minute, so autoquote causes a 6% increase in message traffic on average. Associations are stronger for large-cap stocks, consistent with the conventional wisdom that AT was more prevalent at the time for active, liquid stocks.

Table II also shows that there is a significant positive relationship between the autoquote dummy and our preferred measure of AT, $\mathit{algo\_trad}_{it}$, which is the negative of dollar volume in hundreds per electronic message. Thus, it is clear that autoquote leads to more AT in all but the smallest quintiles.16

16In the IV regressions in Tables III to V we report F statistics that reject the null that the instruments do not enter the first-stage regression. Bound et al. (1995, p. 446) mention that “F statistics close to 1 should be cause for concern.” Our F statistics range from 5.88 to 7.32, and we are thus not afflicted with a weak instruments problem.


There is no consistent relationship between autoquote and any other variable, such as turnover, volatility, and share price.

Our principal goal is to understand the effects of algorithmic liquidity supply on market quality, and so we use the autoquote dummy as an instrument for AT in a panel regression framework. Our main instrumental variables specification is a daily panel of 1,082 NYSE stocks over the 8-month sample period spanning the staggered implementation of autoquote. The dependent variable is one of five liquidity measures: quoted half-spreads, effective half-spreads, realized spreads, or price impacts, all of which are share volume-weighted and measured in basis points, or the quoted depth in thousands of dollars. We include fixed effects for each stock as well as time dummies, and we include share turnover, volatility based on the daily price range (high minus low, see Parkinson (1980)), the inverse of share price, and the log of market cap as control variables. Results are virtually identical if we exclude these control variables. Based on anecdotal information that AT was relatively more important for active large-cap stocks during this time period, we estimate this specification separately for each market-cap quintile.

The estimated equation is

$$L_{it} = \alpha_i + \gamma_t + \beta A_{it} + \delta X_{it} + \varepsilon_{it}, \qquad (7)$$

where $L_{it}$ is a spread measure for stock $i$ on day $t$, $A_{it}$ is the AT measure $\mathit{algo\_trad}_{it}$, and $X_{it}$ is a vector of control variables, including share turnover, volatility, the inverse of share price, and log market cap. We always include fixed effects and time dummies. The set of instruments consists of all explanatory variables, except that we replace $\mathit{algo\_trad}_{it}$ with $\mathit{auto\_quote}_{it}$. Inference is based on standard errors that are robust to general cross-section and time-series heteroskedasticity and within-group autocorrelation (see Arellano and Bond (1991)). Section II of the Internet Appendix shows that the IV regression is unaffected by the use of a proxy for AT, as long as the noise in the proxy is uncorrelated with the autoquote instrument.
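Equation (7) can be sketched as a manual two-stage least squares on top of the same two-way fixed-effects machinery. The code below uses hypothetical file and column names, takes an illustrative shortcut with the first-stage fitted values, and flags the standard-error caveat that a dedicated IV routine would handle properly.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_parquet("autoquote_panel.parquet").set_index(["stock", "date"])
controls = ["turnover", "volatility", "inv_price", "log_mktcap"]  # X_it

# Stage 1: algo_trad on the autoquote instrument plus controls (equation (6)).
fs = PanelOLS(panel["algo_trad"], panel[["auto_quote"] + controls],
              entity_effects=True, time_effects=True).fit()
# Sketch: a textbook 2SLS would project within the fixed-effects transformation.
panel["algo_trad_hat"] = fs.fitted_values.iloc[:, 0]

# Stage 2: liquidity on instrumented AT plus controls (equation (7)).
# NB: naive second-stage standard errors are not valid 2SLS errors; a full
# implementation adjusts them (or uses a packaged IV estimator).
iv = PanelOLS(panel["qspread"], panel[["algo_trad_hat"] + controls],
              entity_effects=True, time_effects=True).fit(
                  cov_type="clustered", cluster_entity=True)
print(iv.params["algo_trad_hat"])  # cf. the -0.53 quintile-1 estimate
```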

The results are reported in Panel A of Table III, and the most reliable effects are in larger stocks. For large-cap stocks (quintiles 1 and 2), the autoquote instrument shows that an increase in algorithmic liquidity supply narrows both the quoted and effective spread. To interpret the estimated coefficient on the AT variable, recall that the AT measure $\mathit{algo\_trad}_{it}$ is the negative of dollar volume per electronic message, measured in hundreds of dollars, while the spread is measured in basis points. Thus, the IV estimate of −0.53 on the AT variable for quintile 1 means that a unit increase in AT, for example, from the sample mean of $1,844 to $1,744 of volume per message, implies that quoted spreads narrow by 0.53 basis points.17 The average within-stock standard deviation for $\mathit{algo\_trad}_{it}$ is 4.54, or $454, so a one-standard deviation increase in AT implies a narrowing of roughly 2.4 basis points (4.54 × 0.53).

17Table IA.4 in the Internet Appendix contains additional analysis showing that the message traffic component of $\mathit{algo\_trad}_{it}$ drives the decline in spreads.
