1. Jonathan Berk paper

    Main Results and Relevance to the Q Group Mission

    A central question in asset management is how to correctly measure risk. Fifty years ago Jack Treynor, along with Bill Sharpe and others, developed the Capital Asset Pricing Model (CAPM). However, in the intervening years the validity of the CAPM as a measure of risk has been questioned. In response, researchers have developed extensions to the original model that better explain the cross sectional distribution of expected returns.

    The implicit assumption in this research agenda is that a model that better explains cross sectional variation in returns necessarily better explains risk differences. But this assumption is problematic. To see why, consider the following analogy. Rather than look for an alternative theory, early astronomers reacted to the inability of the Ptolemaic theory to explain the motion of the planets by “fixing” each observational inconsistency by adding an additional epicycle to the theory. By the time Copernicus proposed the correct theory that the Earth revolved around the Sun, the Ptolemaic theory had been fixed so many times it better explained the motion of the planets than the Copernican system. (Copernicus wrongly assumed that the planets followed circular orbits when in fact their orbits are ellipses.) Similarly, although the extensions to the CAPM better explain the cross section of asset returns, it is hard to know, using traditional tests, whether these extensions represent true progress towards measuring risk or simply the asset pricing equivalent of an epicycle. To determine whether any extension to the CAPM better explains risk, one needs to confront the models with facts they were not designed to explain.

    In this paper we design a new test of asset pricing models by starting with the observation that all asset pricing models assume that investors compete fiercely with each other to find positive net present value investment opportunities, and in doing so, eliminate them. As a consequence of this competition, equilibrium prices are set so that the expected return of every asset is solely a function of its risk. When a positive net present value (NPV) investment opportunity presents itself in capital markets (that is, an asset is mispriced relative to the model investors are using) investors react by submitting buy or sell orders until the opportunity no longer exists (the mispricing is removed). These buy and sell orders reveal the preferences of investors and therefore they reveal which asset pricing model investors are using. That is, by observing these orders one can infer which asset pricing model investors use to price risk. We demonstrate that we can implement this test using mutual fund data. We derive a simple test statistic that allows us to infer, from a set of candidate models, the model that is closest to the asset pricing model investors are actually using. We find that the CAPM is the closest model. That is, none of the extensions to the original model better explains investor behavior. Importantly, the CAPM better explains investor decisions than no model at all, indicating that investors do price risk. Most surprisingly, the CAPM also outperforms a naive model in which investors ignore beta and simply chase any outperformance relative to the market portfolio. Investors’ capital allocation decisions reveal that they measure risk using the CAPM beta.

    Assessing Asset Pricing Models
    using Revealed Preference


    Jonathan B. Berk
    Stanford University and NBER

    Jules H. van Binsbergen
    University of Pennsylvania and NBER

    September 2013
    This draft: August 11, 2015


    We propose a new method of testing asset pricing models that relies on using quantities rather than simply prices or returns. We use the capital flows into and out of mutual funds to infer which risk model investors use. We derive a simple test statistic that allows us to infer, from a set of candidate models, the model that is closest to the model that investors use in making their capital allocation decisions. Using our method, we assess the performance of the most commonly used asset pricing models in the literature.

    * We are grateful to John Cochrane, George Constantinides, Peter DeMarzo, Wayne Ferson, Ravi Jagannathan, Valentine Haddad, Lars Hansen, John Heaton, Binying Lui, Tim McQuade, Lubos Pastor, Paul Pfleiderer, Monika Piazzesi, Anamaria Pieschacon, Martin Schneider, Ken Singleton, Rob Stambaugh, and seminar participants at the 2015 AFA meetings, Harvard, the Kellogg Junior Finance Conference, Notre Dame, Princeton University, Stanford GSB, the Stanford Institute for Theoretical Economics (SITE), the University of Chicago, University of Washington Summer Finance Conference and Washington University in St. Louis for their comments and suggestions.

    All neoclassical capital asset pricing models assume that investors compete fiercely with each other to find positive net present value investment opportunities, and in doing so, eliminate them. As a consequence of this competition, equilibrium prices are set so that the expected return of every asset is solely a function of its risk. When a positive net present value (NPV) investment opportunity presents itself in capital markets (that is, an asset is mispriced relative to the model investors are using) investors react by submitting buy or sell orders until the opportunity no longer exists (the mispricing is removed). These buy and sell orders reveal the preferences of investors and therefore they reveal which asset pricing model investors are using. By observing whether or not buy and sell orders occur in reaction to the existence of positive net present value investment opportunities as defined by a particular asset pricing model, one can infer whether investors price risk using that asset pricing model.

    There are two criteria that are required to implement this method. First, one needs a mechanism that identifies positive net present value investment opportunities. Second, one needs to be able to observe investor reactions to these opportunities. We demonstrate that we can satisfy both criteria if we implement the method using mutual fund data. Under the assumption that a particular asset pricing model holds, we use the main insight from Berk and Green (2004) to show that positive (negative) abnormal return realizations in a mutual fund investment must be associated with positive net present value buying (selling) opportunities. We then measure investor reactions to these opportunities by observing the subsequent capital flow into (out of) mutual funds.

    Using this method, we derive a simple test statistic that allows us to infer, from a set of candidate models, the model that is closest to the asset pricing model investors are actually using. Our test can be implemented by running a simple univariate ordinary least squares regression using the t-statistic to assess statistical significance. We illustrate our method by testing the following models: the Capital Asset Pricing Model (CAPM), originally derived by Sharpe (1964), Lintner (1965), Mossin (1966) and Treynor (1961), the reduced form factor models specified by Fama and French (1993) and Carhart (1997) (that are motivated by Ross (1976)) and the dynamic equilibrium models derived by Merton (1973), Breeden (1979), Campbell and Cochrane (1999), Kreps and Porteus (1978), Epstein and Zin (1991) and Bansal and Yaron (2004).
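    The univariate regression just described can be sketched numerically. The following is a minimal illustration (not the paper's code) of regressing the sign of flows on the sign of outperformance and reading off the slope and a plain t-statistic; the paper's actual inference uses double-clustered standard errors, which are omitted here for brevity, and the variable names are assumptions.

```python
import numpy as np

def flow_performance_test(signed_flows, signed_alphas):
    """OLS of sign(flow) on sign(outperformance): slope and a plain t-stat.

    Simplified sketch of the paper's univariate test; double clustering
    by fund and time is omitted.
    """
    x = np.sign(np.asarray(signed_alphas, dtype=float))
    y = np.sign(np.asarray(signed_flows, dtype=float))
    x_c = x - x.mean()
    beta = (x_c @ y) / (x_c @ x_c)            # OLS slope
    intercept = y.mean() - beta * x.mean()
    resid = y - intercept - beta * x
    n = len(y)
    s2 = (resid @ resid) / (n - 2)            # residual variance
    se = np.sqrt(s2 / (x_c @ x_c))            # homoskedastic standard error
    return beta, beta / se

# Toy check: flows that agree with outperformance 90% of the time
# should yield a slope near 0.8 and a large t-statistic.
rng = np.random.default_rng(0)
alphas = rng.standard_normal(1000)
flows = np.where(rng.random(1000) < 0.9, np.sign(alphas), -np.sign(alphas))
beta_hat, t_stat = flow_performance_test(flows, alphas)
```

In this toy setup the slope estimates how reliably the sign of flows tracks the sign of outperformance under the candidate model, which is the intuition behind the paper's test statistic.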

    We find that the CAPM is the closest model to the model that investors use to make their capital allocation decisions. Importantly, the CAPM better explains flows than no model at all, indicating that investors do price risk. Most surprisingly, the CAPM also outperforms a naive model in which investors ignore beta and simply chase any outperformance relative to the market portfolio. Investors’ capital allocation decisions reveal that they measure risk using the CAPM beta.

    Our result, that investors appear to be using the CAPM to make their investment decisions, is very surprising in light of the well documented failure of the CAPM to adequately explain the cross-sectional variation in expected stock returns. Although, ultimately, we leave this as a puzzle to be explained by future research, we do note that much of the flow in and out of mutual funds remains unexplained. To that end the paper leaves as an unanswered question whether the unexplained part of flows results because investors use a superior, yet undiscovered, risk model, or whether investors use other, non-risk based, criteria to make investment decisions.

    It is important to emphasize that implementing our test requires accurate measurement of the variables that determine the Stochastic Discount Factor (SDF). In the case of the CAPM, the SDF is measured using market prices which contain little or no measurement error, and more importantly, can be observed by investors as accurately as by empiricists. Testing the dynamic equilibrium models relies on observing variables such as consumption, which investors can measure precisely (they presumably know their own consumption) but empiricists cannot, particularly over short horizons. Consequently our tests cannot differentiate whether these models underperform because they rely on variables that are difficult to measure, or because the underlying assumptions of these models are flawed.

    Because we implement our method using mutual fund data, one might be tempted to conclude that our tests only reveal the preferences of mutual fund investors, rather than all investors. But this is not the case. When an asset pricing model correctly prices risk, it rules out positive net present value investment opportunities in all markets. Even if no investor in the market with a positive net present value opportunity uses the asset pricing model under consideration, so long as there are investors in other markets that use the asset pricing model, those investors will recognize the positive net present value opportunity and will act to eliminate it. That is, if our test rejects a particular asset pricing model, we are not simply rejecting the hypothesis that mutual fund investors use the model, but rather, we are rejecting the hypothesis that any investor who could invest in mutual funds uses the model.

    Of course, the possibility exists that investors are not using a risk model to price assets. In that case our tests only reveal the preferences of mutual fund investors because it is possible, in this world, for investors in other markets to be uninterested in exploiting positive net present value investment opportunities in the mutual fund market. However, mutual fund investors actually represent a very large fraction of all investors. In 2013, 46 percent of households invested in mutual funds. More importantly, this number rises to 81% for households with income that exceeds $100,000. 1

    The first paper to use mutual fund flows to infer investor preferences is Guercio and Tkac (2002). Although the primary focus of their paper is on contrasting the inferred behavior of retail and institutional investors, that paper documents that flows respond to outperformance relative to the CAPM. The paper does not consider other risk models. Clifford, Fulkerson, Jordan, and Waldman (2013) study the effect of increases in idiosyncratic risk on inflows and outflows separately (rather than the net flow) and show that both inflows and outflows increase when funds take on more idiosyncratic risk (as defined by the Fama-French-Carhart factor specification). In work subsequent to ours, Barber, Huang, and Odean (2014) use fund flows to infer investor risk preferences and also find (using a different method) that investors use the CAPM rather than the other reduced form factor models that have been proposed.

    1 A New Asset Pricing Test

    The core idea that underlies every neoclassical asset pricing model in economics is that prices are set by agents chasing positive net present value investment opportunities. When financial markets are perfectly competitive, these opportunities are competed away so that, in equilibrium, prices are set to ensure that no positive net present value opportunities exist. Prices respond to the arrival of new information by instantaneously adjusting to eliminate any positive net present value opportunities that arise. It is important to appreciate that this price adjustment process is part of all asset pricing models, either explicitly (if the model is dynamic) or implicitly (if the model is static). The output of all these models, a prediction about expected returns, relies on the assumption that this price adjustment process occurs.

    The importance of this price adjustment process has long been recognized by financial economists and forms the basis of the event study literature. In that literature, the asset pricing model is assumed to be correctly identified. In that case, because there are no positive net present value opportunities, the price change that results from new information (i.e., the part of the change not explained by the asset pricing model) measures the value of the new information.

    Because prices always adjust to eliminate positive net present value investment opportunities, under the correct asset pricing model, expected returns are determined by risk alone. Modern tests of asset pricing theories test this powerful insight using return data. Rejection of an asset pricing theory occurs if positive net present value opportunities are detected, or, equivalently, if investment opportunities can be found that consistently yield returns in excess of the expected return predicted by the asset pricing model. The most important shortcoming in interpreting the results of these tests is that the empiricist is never sure that a positive net present value investment opportunity that is identified ex post was actually available ex ante. 2

    1 As reported in the 2014 Investment Company Fact Book, Chapter Six, Figures 6.1 and 6.5 (see http://www.icifactbook.org ).

    An alternative testing approach, that does not have this shortcoming, is to identify positive net present value investment opportunities ex ante and test for the existence of an investor response. That is, do investors react to the existence of positive net present value opportunities that result from the revelation of new information? Unfortunately, for most financial assets, investor responses to positive net present value opportunities are difficult to observe. As Milgrom and Stokey (1982) show, the price adjustment process can occur with no transaction volume whatsoever, that is, competition is so fierce that no investor benefits from the opportunity. Consequently, for most financial assets the only observable evidence of this competition is the price change itself. Thus testing for investor response is equivalent to standard tests of asset pricing theory that use return data to look for the elimination of positive net present value investment opportunities.

    The key to designing a test to directly detect investor responses to positive net present value opportunities is to find an asset for which the price is fixed. In this case the market equilibration must occur through volume (quantities). A mutual fund is just such an asset. The price of a mutual fund is always fixed at the price of its underlying assets, or the net asset value (NAV). In addition, fee changes are rare. Consequently, if, as a result of new information, an investment in a mutual fund represents a positive net present value investment opportunity, the only way for investors to eliminate the opportunity is by trading the asset. Because this trade is observable, it can be used to infer which investments investors believe to be positive net present value opportunities. One can then compare those investments to the ones the asset pricing model under consideration identifies to be positive net present value and thereby infer whether investors are using the asset pricing model. That is, by observing investors’ revealed preferences in their mutual fund investments, we are able to infer information about what (if any) asset pricing model they are using.

    2 For an extensive analysis of this issue, see Harvey, Liu, and Zhu (2014).

    1.1 The Mutual Fund Industry

    Mutual fund investment represents a large and important sector in U.S. financial markets. In the last 50 years there has been a secular trend away from direct investing. Individual investors used to make up more than 50% of the market; today they are responsible for barely 20% of the total capital investment in U.S. markets. During that time, there has been a concomitant rise in indirect investment, principally in mutual funds. Mutual funds used to make up less than 5% of the market; today they make up 1/3 of total investment. 3 Today, the number of mutual funds that trade in the U.S. exceeds the number of stocks that trade.

    Berk and Green (2004) derive a model of how the market for mutual fund investment equilibrates that is consistent with the observed facts. 4 They start with the observation that the mutual fund industry is like any industry in the economy: at some point it displays decreasing returns to scale. 5 Given the assumption under which all asset pricing models are derived (perfectly competitive financial markets), this observation immediately implies that all mutual funds must have enough assets under management so that they face decreasing returns to scale. When new information arrives that convinces investors that a particular mutual fund represents a positive net present value investment, investors react by investing more capital in the mutual fund. This process continues until enough new capital is invested to eliminate the opportunity. As a consequence, the model is able to explain two robust empirical facts in the mutual fund literature: that mutual fund flows react to past performance while future performance is largely unpredictable. 6 Investors “chase” past performance because it is informative: mutual fund managers that do well (poorly) have too little (much) capital under management. By competing to take advantage of this information, investors eliminate the opportunity to predict future performance.

    A key assumption of the Berk and Green (2004) model is that mutual fund managers are skilled and that this skill varies across managers. Berk and van Binsbergen (2013) verify this fact. They demonstrate that such skill exists and is highly persistent. More importantly, for our purposes, they demonstrate that mutual fund flows contain useful information. Not only do investors systematically direct flows to higher skilled managers,

    3 See French (2008).
    4 Stambaugh (2014) derives a general equilibrium version of this model based on the model in Pastor and Stambaugh (2012).
    5 Pastor, Stambaugh, and Taylor (2015) provide empirical evidence supporting this assumption.
    6 An extensive literature has documented that capital flows are responsive to past returns (see Chevalier and Ellison (1997) and Sirri and Tufano (1998)) and future investor returns are largely unpredictable (see Carhart (1997)).

    but managerial compensation, which is primarily determined by these flows, predicts future performance as far out as 10 years. Investors know who the skilled managers are and compensate them accordingly. It is this observation that provides the starting point for our analysis. Because the capital flows into mutual funds are informative, they reveal the asset pricing model investors are using.

    1.2 Private Information

    Most asset pricing models are derived under the assumption that all investors are symmetrically informed. Hence, if one investor faces a positive NPV investment opportunity, all investors face the same opportunity and so it is instantaneously removed by competition. The reality is somewhat different. The evidence in Berk and van Binsbergen (2013) of skill in mutual fund management implies that at least some investors have access to different information or have different abilities to process information. As a result, under the information set of this small set of informed investors, not all positive net present value investment opportunities are instantaneously competed away.

    As Grossman (1976) argued, in a world where there are gains to collecting information and information gathering is costly, not everybody can be equally informed in equilibrium. If everybody chooses to collect information, competition between investors ensures that prices reveal the information and so information gathering is unprofitable. Similarly, if nobody collects information, prices are uninformative and so there are large profits to be made collecting information. Thus, in equilibrium, investors must be differentially informed (see, e.g., Grossman and Stiglitz (1980)). Investors with the lowest information gathering costs collect information so that, on the margin, what they spend on information gathering, they make back in trading profits. Presumably these investors are few in number so that the competition between them is limited, allowing for the existence of prices that do not fully reveal their information. As a result, information gathering is a positive net present value endeavor for a limited number of investors.

    The existence of asymmetrically informed investors poses a challenge for empiricists wishing to test asset pricing models derived under the assumption of symmetrically informed investors. Clearly, the empiricist’s information set matters. For example, asset pricing models fail under the information set of the most informed investor, because the key assumption that asset markets are competitive is false under that information set. Consequently, the standard in the literature is to assume that the information set of the uninformed investors only contains publicly available information all of which is already impounded in all past and present prices, and to conduct the test under that information set. For now, we will adopt the same strategy but will revisit this assumption in Section 5.2, where we will explicitly consider the possibility that the majority of investors’ information sets includes more information than just what is already impounded in past and present prices.










    3 Results

    We use the mutual fund data set in Berk and van Binsbergen (2013). The data set spans the period from January 1977 to March 2011. We remove all funds with fewer than 5 years of data, leaving 4,275 funds. 11 Berk and van Binsbergen (2013) undertook an extensive data project to address several shortcomings in the CRSP database by combining it with Morningstar data, and we refer the reader to the data appendix of that paper for the details.

    To implement the tests derived in Propositions 2 and 5 it is necessary to pick an observation horizon. For most of the sample, funds report their AUMs monthly; however, in the early part of the sample many funds report their AUMs only quarterly. In order not to introduce a selection bias by dropping these funds, the shortest horizon we will consider is three months. Furthermore, as pointed out above, we need a horizon length of more than a month to compute the outperformance measure for the dynamic equilibrium models. If investors react to new information immediately, then flows should immediately respond to performance and the appropriate horizon to measure the effect would be the shortest horizon possible. But in reality there is evidence that investors do not respond immediately. Mamaysky, Spiegel, and Zhang (2008) show that the net alpha of mutual funds is predictably non-zero for horizons shorter than a year, suggesting that capital does not move instantaneously. There is also evidence of investor heterogeneity because some investors appear to update faster than others. 12 For these reasons, we also consider longer horizons (up to four years). The downside of using longer horizons is that longer horizons tend to put less weight on investors who update immediately, and these investors are also the investors more likely to be marginal in setting prices. To ensure that we do not inadvertently introduce autocorrelation in the horizon returns across funds, we drop all observations before the first January observation for a fund; that is, we thereby ensure that the first observation for all funds occurs in January.

    11 We chose to remove these funds to ensure that incubation flows do not influence our results. Changing the criterion to 2 years does not change our results. These results are available on request.
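    The two sample filters just described (dropping short-lived funds and trimming each fund to start at its first January) can be sketched on a hypothetical fund-month panel. The column names `fund_id` and `date`, and the assumption of one row per fund-month, are illustrative choices, not the paper's actual data layout.

```python
import pandas as pd

def apply_sample_filters(panel, min_years=5):
    """Sketch of the two sample filters described in the text.

    `panel` is a hypothetical DataFrame with one row per fund-month and
    columns 'fund_id' and 'date' (month-start Timestamps).
    """
    panel = panel.sort_values(["fund_id", "date"])
    # 1. Drop funds with fewer than `min_years` of data (guards against
    #    incubation flows, per the text's footnote). Assumes monthly rows.
    n_obs = panel.groupby("fund_id")["date"].transform("size")
    panel = panel[n_obs >= min_years * 12]
    # 2. Drop each fund's observations before its first January, so that
    #    horizon returns are aligned across funds.
    first_jan = (panel.loc[panel["date"].dt.month == 1]
                 .groupby("fund_id")["date"].min()
                 .rename("first_jan").reset_index())
    panel = panel.merge(first_jan, on="fund_id")  # funds with no January drop out
    return (panel[panel["date"] >= panel["first_jan"]]
            .drop(columns="first_jan"))

# Toy panel: fund "A" has 61 months starting March 1990 (kept, then
# trimmed to start January 1991); fund "B" has only 24 months (dropped).
panel = pd.concat([
    pd.DataFrame({"fund_id": "A",
                  "date": pd.date_range("1990-03-01", periods=61, freq="MS")}),
    pd.DataFrame({"fund_id": "B",
                  "date": pd.date_range("2000-01-01", periods=24, freq="MS")}),
])
filtered = apply_sample_filters(panel)
```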

    The flow of funds is important in our empirical specification because it affects the alpha generating technology as specified by h(.). Consequently, we need to be careful to ensure that we only use the part of capital flows that affects this technology. For example, it does not make sense to include as an inflow of funds increases in fund size that result from inflation, because such increases are unlikely to affect the alpha generating process. Similarly, the fund’s alpha generating process is unlikely to be affected by changes in size that result from changes in the price level of the market as a whole. Consequently, we will measure the flow of funds over a horizon of length T as
    F_{it} ≡ q_{it} − q_{i,t−T} (1 + R^m_{t−T,t}),

    where q_{it} is fund i’s assets under management (AUM) at time t and R^m_{t−T,t} is the return on the market over the horizon, so that growth in fund size attributable to market-wide price changes is not counted as a flow.
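    A minimal sketch of such a market-adjusted flow measure, under the assumption (stated here, not taken from this excerpt) that beginning-of-horizon AUM is scaled up by the market return before being compared with end-of-horizon AUM:

```python
def net_flow(q_end, q_start, market_return):
    """Capital flow over a horizon, net of market-wide price appreciation.

    Growth in fund size that merely tracks the market's price level is
    excluded by scaling beginning-of-horizon AUM by the market return.
    The functional form is an assumption consistent with the text.
    """
    return q_end - q_start * (1.0 + market_return)

# A fund that grows exactly with the market experiences zero net flow;
# growth beyond the market return is attributed to investor inflows.
zero_case = net_flow(110.0, 100.0, 0.10)
inflow_case = net_flow(120.0, 100.0, 0.10)
```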
    Table 4: Tests of Statistical Significance: The first two columns in the table provide the coefficient estimate and double-clustered t-statistic (see Thompson (2011) and the discussion in Petersen (2009)) of the univariate regression of signed flows on signed outperformance. The rest of the columns provide the statistical significance of the pairwise test, derived in Proposition 5, of whether the models are better approximations of the true asset pricing model. For each model in a column, the table displays the double-clustered t-statistic of the test that the model in the row is a better approximation of the true asset pricing model, that is, that the row model’s coefficient exceeds the column model’s. The rows (and columns) are ordered by the estimated coefficient, with the best performing model on top. The number following the long run risk models denotes the percentage of the wealth portfolio invested in bonds.

    we can reject the hypothesis that investors just react to past returns. The next possibility is that investors are risk neutral. In an economy with risk neutral investors we would find that the excess return best explains flows, so the performance of this model can be assessed by looking at the columns labeled “Ex. Ret.” Notice that all the risk models nest this model, so to conclude that a risk model better approximates the true model, the risk model must statistically outperform this model. The factor models all satisfy this criterion, allowing us to conclude that investors are not risk neutral. Unfortunately, none of the dynamic asset pricing models satisfies this criterion. Finally, one might hypothesize that investors benchmark their investments relative to the market portfolio alone, that is, they do not adjust for any risk differences (beta) between their investment and the market. The performance of this model is reported in the column labeled “Ex. Mkt.” Again, all the factor models statistically significantly outperform this model: investors’ actions reveal that they adjust for risk using beta. We view this result as the most surprising result in the paper.

    Our results also allow us to discriminate between the factor models. Recall that both the FF and FFC factor specifications nest the CAPM, so to conclude that either factor model better approximates the true model, it must statistically significantly outperform the CAPM. The test of this hypothesis is in the columns labeled “CAPM.” Neither factor model statistically outperforms the CAPM at any horizon. Indeed, at all horizons the CAPM actually outperforms both factor models, implying that the additional factors add no more explanatory power for flows.
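    The flavor of these pairwise comparisons can be sketched as follows. This simplified illustration treats each model's coefficient as approximately the mean product of the flow sign and that model's outperformance sign (valid when the signs are roughly mean zero) and tests the difference with a paired t-test; the paper's actual test (Proposition 5) double-clusters by fund and time, a refinement omitted here.

```python
import numpy as np

def compare_models(signed_flows, signed_alpha_a, signed_alpha_b):
    """Paired test of which model's outperformance sign better tracks flows.

    With signs coded +/-1, each model's slope is approximately
    mean(sign(flow) * sign(alpha)), so the slope difference can be tested
    with a paired t-test on the per-observation products. Simplified
    sketch; no clustering.
    """
    y = np.sign(np.asarray(signed_flows, dtype=float))
    d = y * (np.sign(np.asarray(signed_alpha_a, dtype=float))
             - np.sign(np.asarray(signed_alpha_b, dtype=float)))
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return d.mean(), t

# Toy check: flows track model A's signal 90% of the time, while
# model B's signal is pure noise, so A should win decisively.
rng = np.random.default_rng(1)
alpha_a = rng.standard_normal(2000)
alpha_b = rng.standard_normal(2000)
flows = np.where(rng.random(2000) < 0.9, np.sign(alpha_a), -np.sign(alpha_a))
diff_ab, t_ab = compare_models(flows, alpha_a, alpha_b)
```

Because the factor models nest the CAPM, the paper requires this kind of pairwise difference to be statistically significant before crediting the larger model.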

    The relative performance of the dynamic equilibrium models is poor. We can confidently reject the hypothesis that any of these models is a better approximation of the true model than the CAPM. But this result should be interpreted with caution. These models rely on variables like consumption which are notoriously difficult for empiricists to measure, but are observed perfectly by investors themselves.

    It is also informative to compare the tests of statistical significance across horizons. The ability to statistically discriminate between the models deteriorates as the horizon increases. This is what one would expect to observe if investors instantaneously moved capital in response to the information in realized returns. Thus, this evidence is consistent with the idea that capital does in fact move quickly to eliminate positive net present value investment opportunities.

    The evidence that investors appear to be using the CAPM is puzzling given the inability of the CAPM to correctly account for cross-sectional differences in average returns. Although providing a complete explanation of this puzzling finding is beyond the scope of this paper, in the next section we will consider a few possible explanations. We will leave the question of which, if any, explanation resolves this puzzle to future research.

    4 Implications

    The empirical finding that the CAPM does a poor job explaining cross-sectional variation in expected returns raises a number of possibilities about the relation between risk and return. The first possibility, and the one most often considered in the existing literature, is that this finding does not invalidate the neoclassical paradigm that requires expected returns to be a function solely of risk. Instead, it merely indicates that the CAPM is not the correct model of risk, and, more importantly, a better model of risk exists. As a consequence, researchers have proposed more general risk models that better explain the cross section of expected returns.

    The second possibility is that the poor performance of the CAPM is a consequence of the fact that there is no relation between risk and return. That is, expected returns are determined by non-risk based effects. The final possibility is that risk only partially explains expected returns, and that other, non-risk based factors, also explain expected returns. The results in this paper shed new light on the relative likelihood of these possibilities.

    The fact that we find that the factor models all statistically significantly outperform our “no model” benchmarks implies that the second possibility is unlikely. If there were no relation between risk and expected return, there would be no reason for the CAPM to best explain investors’ capital allocation decisions. The fact that it does indicates that at least some investors do trade off risk and return. That leaves the question of whether the failure of the CAPM to explain the cross section of expected stock returns results because a better model of risk exists, or because factors other than risk also explain expected returns.

    Based on the evidence using return data, one might be tempted to conclude (after properly taking into account the data mining bias discussed in Harvey, Liu, and Zhu (2014)) that if multi-factor models do a superior job explaining the cross-section, they necessarily explain risk better. But this conclusion is premature. To see why, consider the following analogy. Rather than look for an alternative theory, early astronomers reacted to the inability of the Ptolemaic theory to explain the motion of the planets by “fixing” each observational inconsistency by adding an additional epicycle to the theory. By the time Copernicus proposed the correct theory that the Earth revolved around the Sun, the Ptolemaic theory had been fixed so many times it better explained the motion of the planets than the Copernican system. 15 Similarly, although the extensions to the CAPM better explain the cross section of asset returns, it is hard to know, using traditional tests, whether these extensions represent true progress towards measuring risk or simply the asset pricing equivalent of an epicycle.

    Our results shed light on this question. By our measures, factor models do no better at explaining investor behavior than the CAPM, even though they nest the CAPM. This fact reduces the likelihood that these models better explain the cross section of expected returns because they are better risk models. This is a key advantage of our testing method: it can differentiate between whether current extensions to the CAPM just improve the model's fit to existing data or whether they represent progress towards a better model of risk. The extensions of the CAPM were proposed to better fit returns, not flows. As such, flows provide a new set of moments with which those models can be confronted. Consequently, if an extension of the original model better explains mutual fund flows, this suggests that the extension does indeed represent progress towards a superior risk model. Conversely, if the extended model cannot better explain flows, then we should worry that the extension is the modern equivalent of an epicycle, an arbitrary fix designed simply to ensure that the model better explains the cross section of returns.

    Our method can also shed light on the third possibility, that expected returns might be a function of both risk and non-risk based factors. To conclude that a better risk model exists, one has to show that the part of the variation in asset returns not explained by the CAPM can be explained by variation in risk. This is what the flow of funds data allows us to do. If variation in asset returns that is not explained by the CAPM attracts flows, then one can conclude that this variation is not compensation for risk. Thus our method allows us to infer something existing tests of factor models cannot: whether or not a new factor that explains returns measures risk. What our results imply is that the factors that have been proposed do not measure additional risk not measured by the CAPM. What these factors actually do measure is clearly an important question for future research.

    5 Tests of the Robustness of our Results

    In this section we consider other possible explanations for our results. First, we look at the possibility that mutual fund fee changes might be part of the market equilibrating mechanism. Then we test the hypothesis that investors' information sets contain more than what is in past and present prices. Finally, we cut the data sample along two dimensions and examine whether our results change in the subsamples. Specifically, we examine whether our results change if we start the analysis in 1995 rather than 1978, and if we restrict attention to large return observations. In both cases we show that our results are unchanged in these subsamples.

    15 Copernicus wrongly assumed that the planets followed circular orbits when in fact their orbits are ellipses.

    5.1 Fee Changes

    As argued in the introduction, capital flows are not the only mechanism that could equilibrate the mutual fund market. An alternative mechanism is for fund managers to adjust their fees to ensure that the fund's alpha is zero. In fact, fee changes are rare, occurring in less than 4% of our observations, making it unlikely that fee changes play any role in equilibrating the mutual fund market. Nevertheless, in this section we run a robustness check to make sure that fee changes do not play a role in explaining our results.

    The fees mutual funds charge are stable because they are specified in the fund's prospectus, so theoretically, a change to the fund's fee requires a change to the fund's prospectus, a relatively costly endeavor. However, the fee in the prospectus actually specifies the maximum fee the fund is allowed to charge, because funds are allowed to (and do) rebate some of their fees to investors. Thus, funds can change their fees by giving or discontinuing rebates. To rule out these rebates as a possible explanation of our results, we repeat the above analysis assuming that fee changes are the primary way mutual fund markets equilibrate.






    forward by these readers, once the factors were discovered, people started using them, and so the appropriate time period to compare the CAPM to these factor models is the post 1995 period. Of course, such a view raises interesting questions about the role of economic research. Rather than just trying to discover what asset pricing model people use, under this view, economic researchers also have a role teaching people what model they should be using. To see if there is any support for this hypothesis in the data, we rerun our tests in the sample that excludes data prior to 1995.

    Because the time series of this subsample covers just 16 years, we repeat the analysis using horizons of a year or less.18 Tables 7 and 8 report the results. They are quantitatively very similar to the full sample, and qualitatively the same. At every horizon the performance of the factor models and the CAPM is statistically indistinguishable. At the 3 and 6 month horizons the CAPM actually outperforms both factor models. All three models still significantly outperform the "no model" benchmarks. In addition, the dynamic equilibrium models continue to perform poorly. In summary, there is no detectable evidence that the discovery of the value, size and momentum factors had any influence on how investors measure risk.

    5.4 Restricting the Sample to Large Returns

    One important advantage of our method, which uses only the signs of flows and returns, is that it is robust to outliers. However, this also comes with the important potential limitation that we ignore the information contained in the magnitude of the outperformance and the flow of funds response. It is conceivable that investors might react differently to large and small return outperformance. For example, a small abnormal return might lead investors to update their priors of managerial performance only marginally. Assuming that investors face some cost to transact, it might not be profitable for investors to react to this information by adjusting their investment in the mutual fund. To examine the importance of this hypothesis, we rerun our tests in a subsample that does not include small return realizations.
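    The core of the sign-based comparison can be sketched as follows. This is a minimal illustration, not the paper's actual estimator: `flows` and `abnormal_returns` are hypothetical panel arrays of fund flows and model-implied outperformance, and the paper's test statistic and double-clustered inference are not reproduced here.

```python
import numpy as np

def sign_agreement(flows, abnormal_returns):
    """Fraction of fund-period observations in which the sign of the
    capital flow matches the sign of the model's abnormal return."""
    s_f = np.sign(flows)
    s_r = np.sign(abnormal_returns)
    valid = (s_f != 0) & (s_r != 0)   # drop exact zeros
    return float(np.mean(s_f[valid] == s_r[valid]))

# toy panel: flows follow the sign of outperformance 80% of the time
rng = np.random.default_rng(0)
alpha = rng.normal(size=1000)
flow = np.sign(alpha) * np.abs(rng.normal(size=1000))
flow[:200] = -flow[:200]              # 20% of flows disagree
print(sign_agreement(flow, alpha))    # prints 0.8
```

    A model whose agreement rate is reliably above one half explains flow directions better than chance; comparing such rates across candidate models is the spirit of the test.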

    We focus on deviations from the market return, and begin by dropping all return observations that deviate from the market return by less than 0.1 standard deviation (of the panel of deviations from the market return). The first column of Table 9 reports the results of our earlier tests at the 3 month horizon in this subsample.19 The performance

    18 Because of the loss in data, at longer horizons the double clustered standard errors are so large that there is little power to differentiate between models.

    19 Results for the one year horizon are reported in the internet appendix to this paper. We choose to report the short horizon results because, as before, the results for longer horizons have little statistical power.





    of all models increases relative to the full sample, but only marginally. The other columns in the table increase the window of dropped observations: 0.25, 0.5, 0.75 and 1 standard deviation. What is clear is that increasing the window substantially improves the ability of all models to explain flows. Table 10 reports the statistical significance in these subsamples of the test derived in Proposition 5. The results are again quantitatively similar to the main sample and qualitatively identical. The CAPM is statistically significantly better at explaining flows than the "no model" benchmarks, and none of the factor models statistically outperforms the CAPM.
    It might seem reasonable to infer from the results in Table 9 that transaction costs do explain the overall poor performance of all the models in explaining flows. But caution is in order here. Although the CAPM does explain 75% of flow observations at the 1 standard deviation window, in this sample almost 80% of the data is discarded. It seems hard to believe that transaction costs are so high that only the 20% most extreme observations contain enough information to be worth transacting on.
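    The subsample construction described above can be sketched as follows. This is a rough illustration with made-up numbers, assuming simple NumPy arrays in place of the paper's panel data.

```python
import numpy as np

def restrict_to_large_returns(fund_ret, mkt_ret, window=0.1):
    """Boolean mask keeping only observations whose deviation from the
    market return exceeds `window` standard deviations of the panel of
    deviations from the market return."""
    dev = np.asarray(fund_ret) - np.asarray(mkt_ret)
    cutoff = window * dev.std()
    return np.abs(dev) > cutoff

# made-up returns: only the larger deviations survive a 0.5-sd window
ret = np.array([0.010, 0.002, -0.030, 0.0005, 0.020])
mkt = np.full(5, 0.001)
mask = restrict_to_large_returns(ret, mkt, window=0.5)
```

    Widening `window` discards more of the panel, which is why the 1 standard deviation column of Table 9 retains only the most extreme observations.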

    6 Conclusion

    The field of asset pricing is primarily concerned with the question of how to compute the cost of capital for investment opportunities. Because the net present value of a long-dated investment opportunity is very sensitive to assumptions regarding the cost of capital, computing this cost of capital correctly is of first order importance. Since the initial development of the Capital Asset Pricing Model, a large number of potential return anomalies relative to that model have been uncovered. These anomalies have motivated researchers to develop improved models that "explain" each anomaly as a risk factor. As a consequence, in many (if not most) research studies these factors and their exposures are included as part of the cost of capital calculation. In this paper we examine the validity of this approach to calculating the cost of capital.

    The main contribution of this paper is a new way of testing the validity of an asset pricing model. Instead of following the common practice in the literature, which relies on moment conditions related to returns, we use mutual fund capital flow data. Our study is motivated by revealed preference theory: if the asset pricing model under consideration correctly prices risk, then investors must be using it, and must be allocating their money based on that risk model. Consistent with this theory, we find that investors' capital flows in and out of mutual funds do reliably distinguish between asset pricing models. We find that the CAPM outperforms all extensions to the original model, which implies,



    given our current level of knowledge, that it is still the best method to use to compute the cost of capital of an investment opportunity. This observation is consistent with actual experience. Despite the empirical shortcomings of the CAPM, Graham and Harvey (2001) find that it is the dominant model used by corporations to make investment decisions.
    The results in the paper raise a number of puzzles. First, and foremost, there is the apparent inconsistency that the CAPM does a poor job explaining cross sectional variation in expected returns even though investors appear to use the CAPM beta to measure risk. Explaining this puzzling fact is an important area for future research.

    A second puzzle that bears investigating is the growth in the last 20 years of value and growth mutual funds. If, indeed, investors measure risk using the CAPM beta, it is unclear why they would find investing in such funds attractive. There are a number of possibilities. First, investors might see these funds as a convenient way to characterize CAPM beta risk. Why investors would use these criteria rather than beta itself is unclear. If this explanation is correct, the answer is most likely related to the same reason the CAPM does such a poor job in the cross-section. Another possibility is that value and growth funds are not riskier and so offer investors a convenient way to invest in positive net present value strategies. But this explanation raises the question of why the competition between these funds has not eliminated such opportunities. It is quite likely that by separately investigating what drives flows into and out of these funds, new light can be shed on what motivates investors to invest in these funds.

    Finally, there is the question of what drives the fraction of flows that are unrelated to CAPM beta risk. A thorough investigation of what exactly drives these flows is likely to be highly informative about how risk is incorporated into asset prices.

    Perhaps the most important implication of our paper is that it highlights the usefulness and power of mutual fund data in addressing general asset pricing questions. Mutual fund data provides insights into questions that stock market data cannot. Because the market for mutual funds equilibrates through capital flows instead of prices, we can directly observe investors' investment decisions. That allows us to infer their risk preferences from their actions. The observability of these choices and what this implies for investor preferences has remained largely unexplored in the literature.


    A Proofs
    A.1 Proof of Proposition 2

    The denominator of (6) is positive so we need to show that the numerator is positive as well. Conditioning on the information set at each point in time gives the following expression for the numerator:








    References

    Bansal, R., and A. Yaron (2004): "Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles," Journal of Finance, 59(4), 1481-1509.

    Barber, B. M., X. Huang, and T. Odean (2014): "What Risk Factors Matter to Investors? Evidence from Mutual Fund Flows," Available at SSRN: http://ssrn.com/abstract=2408231.

    Berk, J. B., and R. C. Green (2004): "Mutual Fund Flows and Performance in Rational Markets," Journal of Political Economy, 112(6), 1269-1295.

    Berk, J. B., and I. Tonks (2007): "Return Persistence and Fund Flows in the Worst Performing Mutual Funds," Unpublished Working Paper, NBER No. 13042.

    Berk, J. B., and J. H. van Binsbergen (2013): "Measuring Skill in the Mutual Fund Industry," Journal of Financial Economics, forthcoming.

    Breeden, D. T. (1979): "An Intertemporal Asset Pricing Model with Stochastic Consumption and Investment Opportunities," Journal of Financial Economics, 7(3), 265-296.

    Campbell, J. Y., and J. H. Cochrane (1999): "By Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior," Journal of Political Economy, 107, 205-251.

    Carhart, M. M. (1997): "On Persistence in Mutual Fund Performance," Journal of Finance, 52, 57-82.

    Chevalier, J., and G. Ellison (1997): "Risk Taking by Mutual Funds as a Response to Incentives," Journal of Political Economy, 106, 1167-1200.

    Clifford, C. P., J. A. Fulkerson, B. D. Jordan, and S. Waldman (2013): "Risk and Fund Flows," Available at SSRN: http://ssrn.com/abstract=1752362.

    Epstein, L. G., and S. E. Zin (1991): "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: An Empirical Analysis," Journal of Political Economy, 99, 263-286.

    Fama, E. F., and K. R. French (1993): "Common Risk Factors in the Returns on Stocks and Bonds," Journal of Financial Economics, 33(1), 3-56.

    French, K. R. (2008): "The Cost of Active Investing," Journal of Finance, 63(4), 1537-1573.

    Graham, J. R., and C. R. Harvey (2001): "The Theory and Practice of Corporate Finance: Evidence from the Field," Journal of Financial Economics, 60, 187-243.

    Grossman, S. (1976): "On the Efficiency of Competitive Stock Markets Where Trades Have Diverse Information," Journal of Finance, 31(2), 573-585.

    Grossman, S. J., and J. E. Stiglitz (1980): "On the Impossibility of Informationally Efficient Markets," The American Economic Review, 70(3), 393-408.

    Guercio, D. D., and P. A. Tkac (2002): "The Determinants of the Flow of Funds of Managed Portfolios: Mutual Funds vs. Pension Funds," Journal of Financial and Quantitative Analysis, 37(4), 523-557.

    Harvey, C. R., Y. Liu, and H. Zhu (2014): ". . . and the Cross-Section of Expected Returns," Review of Financial Studies, forthcoming.

    Kreps, D. M., and E. L. Porteus (1978): "Temporal Resolution of Uncertainty and Dynamic Choice Theory," Econometrica, 46(1), 185-200.

    Lintner, J. (1965): "The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets," The Review of Economics and Statistics, 47(1), 13-37.

    Lustig, H., S. Van Nieuwerburgh, and A. Verdelhan (2013): "The Wealth-Consumption Ratio," Review of Asset Pricing Studies, 3(1), 38-94.

    Mamaysky, H., M. Spiegel, and H. Zhang (2008): "Estimating the Dynamics of Mutual Fund Alphas and Betas," Review of Financial Studies, 21(1), 233-264.

    Merton, R. C. (1973): "Optimum Consumption and Portfolio Rules in a Continuous-Time Model," Journal of Economic Theory, 3, 373-413.

    Milgrom, P., and N. Stokey (1982): "Information, Trade, and Common Knowledge," Journal of Economic Theory, 26(1), 17-27.

    Mossin, J. (1966): "Equilibrium in a Capital Asset Market," Econometrica, 34(4), 768-783.

    Pastor, L., and R. F. Stambaugh (2012): "On the Size of the Active Management Industry," Journal of Political Economy, 120, 740-781.

    Pastor, L., R. F. Stambaugh, and L. A. Taylor (2015): "Scale and Skill in Active Management," Journal of Financial Economics, 116, 23-45.

    Petersen, M. A. (2009): "Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches," Review of Financial Studies, 22(1), 435-480.

    Ross, S. A. (1976): "The Arbitrage Theory of Capital Asset Pricing," Journal of Economic Theory, 13(3), 341-360.

    Sharpe, W. F. (1964): "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk," Journal of Finance, 19(3), 425-442.

    Sirri, E. R., and P. Tufano (1998): "Costly Search and Mutual Fund Flows," Journal of Finance, 53(5), 1589-1622.

    Stambaugh, R. F. (2014): "Investment Noise and Trends," Journal of Finance, 69, 1415-1453.

    Thompson, S. B. (2011): "Simple Formulas for Standard Errors that Cluster by Both Firm and Time," Journal of Financial Economics, 99(1), 1-10.

    Treynor, J. (1961): "Toward a Theory of the Market Value of Risky Assets,"

  2. Edwin J. Elton Slides


    Characteristics and Performance

    Edwin J Elton
    Martin J Gruber
    NYU Stern School of Business

    Andre de Souza
    Christopher R Blake
    Fordham University

    What We Know:

    -There is a vast literature that shows that participants in 401(k) plans make suboptimal decisions.
    -Participants change their asset allocation depending on choices offered.
    -Participants infrequently revise their allocations.
    -Participants chase past return.
    -Participants are influenced by default choice they are offered.
    -Participants don’t invest enough.

    Use of Target Date Funds in 401(k) Plans – 2013

    -72% of plans offer Target Date Funds
    -41% of 401(k) investors hold Target Date Funds
    -20% of 401(k) assets are in Target Date Funds
    -43% of assets of new employees go to Target Date Funds
    -Growth 2008-2012: 160B to 481B


    -Constant Proportions — Merton Samuelson
    -Decreasing: Bodie Merton Samuelson
    Campbell and Viceira
    Cocco Gomes Maenhout
    -Increasing: Shiller

    1.Poterba Rauh Venti Wise

    Our Study


    Vanguard Planned Glide Path



    1.Restricted to 2035 or 2030 target date
       a)Different target dates are different mixtures of same funds
       b)Since the same managers manage all dates, deviations from the glide path are similar across different target dates
    2.Fifty families offering target date funds
    3.229 different share classes
    4.Monthly data on all of their holdings

    2035 Debt Equity Choices



      -Average  17 funds
      -68%    10 or more funds
      -24%    25 or more funds
    -Which Family
    -63% all from same family
    -Outside almost always passive and not offered by family
    -13.7% outside active

    TDF holdings of five types of specialized underlying funds (in percent)


    Other categories

    -19%    sector bets
    -8%    country bets
    -4%    long short funds

    Expense ratios across target share classes


    Asset classes of shares held

    -56%    institutional
    -6.5%    retirement
    -15.93%   master trusts

    Fees on Do-It-Yourself


    Measuring Fund Selection Ability


    Two Problems

    1.TDFs designed to decrease risk over time. Betas will change over time. Betas and alphas will be misspecified if a time series regression is fit.
    2.We need to select the indexes.

    Changing Beta

    -We employ the bottom up approach of Elton, Gruber and Blake to solve the problem of changing betas.
    -Compute betas and alphas each month for each of the holdings of a TDF, multiply by the fraction of the TDF portfolio in that asset, and sum over all assets.
    -This is then averaged over time for each TDF and averaged across TDFs.
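    The bottom-up aggregation on this slide might be sketched as follows. The inputs are hypothetical, and the actual Elton, Gruber and Blake estimation of the underlying funds' betas and alphas is not reproduced here.

```python
import numpy as np

def bottom_up(hold_betas, hold_alphas, weights):
    """One month's TDF beta and alpha as holdings-weighted sums of the
    betas and alphas of the underlying funds."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, hold_betas)), float(np.dot(w, hold_alphas))

# hypothetical TDF holding two underlying funds, two months of data
monthly = [bottom_up([1.1, 0.4], [0.001, -0.002], [0.6, 0.4]),
           bottom_up([1.1, 0.4], [0.001, -0.002], [0.5, 0.5])]
# average over time for this TDF (and, in the study, then across TDFs)
avg_beta = np.mean([beta for beta, _ in monthly])
```

    Because the weights are observed each month, the portfolio beta tracks the TDF's changing allocation rather than forcing a single beta on the whole time series.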

    Identifying the Indexes

    – One set of indexes for each Morningstar classification
    – Fund of funds held funds in 12 classifications. For example, for domestic stock funds we used Fama French Carhart indexes.
    – For domestic bond funds we used a general bond index, a mortgage index and a high yield index.

    Do TDFs show selection ability?

    -Before TDF expenses   α = -20 BP   Expense 60 BP
    -For investors in general  α = -70 BP   Expense 110 BP
    -The difference in alpha = the difference in expenses
    -Funds of funds select no better than investors in general. They look better not because of better selection but
    because of selecting share classes with lower expenses.

    How do they do for their investors? (including TDF expenses)

    --20 bp plus -53 bp at the TDF level = -73 bp
    -About the same or slightly worse than investors in mutual funds

    We examined some simple strategies

    1.Hold starting weights constant over the life of the fund of funds, investing only in the funds held in 5 major categories: domestic and international stocks, and domestic and international bonds. This obtains higher Sharpe ratios 67% of the time; the difference is statistically significant at the 5% level.
    2.The same procedure with index funds gives even stronger results. Investors would be better off using a buy and hold strategy in the five major investment types, and even better off using index funds.
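    The Sharpe ratio comparison behind these strategies can be illustrated as follows; the monthly excess return series are made up for the sketch.

```python
import numpy as np

def sharpe_ratio(excess_returns):
    """Annualized Sharpe ratio from monthly excess returns."""
    r = np.asarray(excess_returns, dtype=float)
    return float(np.sqrt(12) * r.mean() / r.std(ddof=1))

# hypothetical monthly excess returns: a TDF versus a buy-and-hold
# portfolio that keeps the TDF's starting weights in five broad classes
tdf = [0.004, -0.010, 0.006, 0.002, -0.003, 0.008]
buy_hold = [0.005, -0.008, 0.006, 0.003, -0.002, 0.008]
buy_hold_wins = sharpe_ratio(buy_hold) > sharpe_ratio(tdf)
```

    Repeating this fund by fund and counting how often the buy-and-hold portfolio wins gives the 67% figure reported on the slide.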

    Shareholder Objectives and Family Objectives

    -Why don’t TDFs do better? The bulk of their holdings are in funds offered by the same family as the TDF.
    -They should benefit from having non-public information.
    -Offsetting factor is conflict of interest — agency problem

    -69.9% of TDFs (720 cases) added a fund with at least one alternative in the same Morningstar classification in the
    same fund family.
    -The average number of alternatives was 3.9.
    -Where may conflict arise
      -New funds
      -Small funds
      -High management fees
      -Funds with high outflows

    New Funds

    -In 15% of cases, a fund that had existed for less than three months could have been selected.
    -It was selected 72% of the time; if chosen at random, it would have been 34% of the time. Annual alphas over the next three years were 86 bp lower than alternatives (t = 2.14).
    -Funds less than 1 year old were selected 57% of the time, versus 34% at random.

      TDFs tend to add new funds, and those funds' subsequent performance was lower than that of alternatives.

    Selection of small funds (only after 6 months in existence)


    Management Fee

    -No evidence that, on average, TDFs selected funds with high management fees
    -13 cases with management fees 40 bp higher: α = -0.256 (t = 2.4)
    -33 cases with management fees 30 bp higher: α = -1.15 (t = 1.4)
    -Funds with cash outflows: not significant


    1.Investors pay only a small amount in expenses above what they would pay to replicate TDFs.
    2.TDFs show very little skill in selecting funds they hold but they select low expense classes of funds.
    3.TDFs hurt performance by
      a)Not following their glide paths
      b)Including esoteric investments
    4.Some TDFs pursue fund family objectives to the detriment of investors’ objectives.
    5.TDFs would do a lot better if they held index funds.

  3. Tarun Chordia Paper

    Cross-Sectional Asset Pricing with Individual Stocks: Betas versus Characteristics*

    Tarun Chordia, Amit Goyal, and Jay Shanken

    January 2015


    We develop a methodology for bias-corrected return-premium estimation from cross-sectional regressions of individual stock returns on betas and characteristics. Over the period from July 1963 to December 2013, there is some evidence of positive beta premiums on the profitability and investment factors of Fama and French (2014), a negative premium on the size factor and a less robust positive premium on the market, but no reliable pricing evidence for the book-to-market and momentum factors. Firm characteristics consistently explain a much larger proportion of variation in estimated expected returns than factor loadings, however, even with all six factors included in the model.


    *We thank Joe Chen, Wayne Ferson, Ken Singleton, and seminar participants at the Financial Research Association Meetings, Frontiers of Finance Conference at Warwick Business School, Deakin University, Erasmus University, Goethe University Frankfurt, Laval University, McGill University, Singapore Management University, State University of New York at Buffalo, Tilburg University, University of Missouri, and the University of Washington summer conference for helpful suggestions. Special thanks to Jon Lewellen for his insightful comments. Amit Goyal would like to thank Rajna Gibson for her support through her NCCR-FINRISK project.

    A fundamental paradigm in finance is that of risk and return: riskier assets should earn higher expected returns. It is the systematic or nondiversifiable risk that should be priced, and under the Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965), and Mossin (1966) this systematic risk is measured by an asset's market beta. While Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973) do find a significant positive cross-sectional relation between security betas and expected returns, more recently Fama and French (1992) and others find that the relation between betas and returns is negative, though not reliably different from zero. This calls into question the link between risk and expected returns.

    There is also considerable evidence of cross-sectional patterns (so-called anomalies) in stock returns that raises doubts about the risk-return paradigm. Specifically, price momentum, documented by Jegadeesh and Titman (1993), represents the strong abnormal performance of past winners relative to past losers. The size and book-to-market effects have been empirically established by, among others, Fama and French (1992). In particular, small market capitalization stock returns have historically exceeded big market capitalization stock returns, and high book-to-market (value) stocks have outperformed their low book-to-market (growth) counterparts. Brennan, Chordia, and Subrahmanyam (1998) find that investments based on anomalies result in reward-to-risk (Sharpe) ratios that are about three times as high as that obtained by investing in the market, too large, it would seem, to be consistent with a risk-return model (see also MacKinlay (1995)).

    The behavioral finance literature points to psychological biases on the part of investors to explain the breakdown of the risk-return relationship. In contrast, Fama and French (1993) propose a three-factor model that includes risk factors proxying for the size and value effects, in addition to the market excess-return factor, Mkt. The size factor, SMB, is a return spread between small firms and big firms, while the value factor, HML, is a return spread between high and low book-to-market stocks. There is controversy in the literature as to whether these two additional factors are really risk factors, however, i.e., whether the factors can be viewed as hedge portfolios in an intertemporal CAPM along the lines of Merton (1973). Greater still, we suspect, is skepticism about a risk-based interpretation of the momentum factor MOM. This (winner-loser) spread factor is often included in a four-factor model along with the three Fama and French (1993) factors, e.g., Carhart (1997) and Fama and French (2012). More recently, Fama and French (2014) have proposed a five-factor model that adds CMA (conservative minus aggressive investment) and RMW (robust minus weak profitability) factors to the original three.

    While some researchers are inclined to view expected return variation associated with factor loadings (betas) as due to risk, and variation captured by characteristics like book-to-market as due to mispricing, we believe that a more agnostic perspective on this issue is appropriate. One reason is that the betas on an ex-ante efficient portfolio (a potential "factor") will always fully "explain" expected returns as a mathematical proposition (see Roll (1977)), whatever the nature of the underlying economic process. This makes it difficult to infer that a beta effect is truly driven by economic risk unless there is evidence that the factor correlates with some plausible notion of aggregate marginal utility in an intertemporal CAPM or other economic setting.

    For the usual spread factors, it is also important to recognize that there is a mechanical relation between, say, the book-to-market ratio and loadings on HML: a weighted average of the loadings for stocks in the high book-to-market portfolio must exceed that for stocks in the low book-to-market portfolio. Therefore, the relation between loadings and expected returns can be mechanical as well. In fact, Ferson, Sarkissian, and Simin (1998) construct an example in which expected returns are determined entirely by a characteristic, but one that is nearly perfectly correlated with loadings on the associated spread factor. In general, though, there need not be a simple relation between loadings and characteristics at the individual stock level. For example, at the end of 2013, Comcast's book-to-market ratio of 3.4 placed it at the 99th percentile, extreme value territory, while its negative loading on the HML factor was at the 30th percentile, suggestive of a growth tilt. Empirically, we find relatively low correlations (less than 0.5) between characteristics and the corresponding loadings, even adjusting for estimation-error noise. Therefore, it is legitimate to ask whether the underlying firm characteristics or the factor loadings do a better job of tracking expected returns in the cross-section. Answering this question is the main objective of our paper.

    1 See related work by Haugen and Baker (1996), Titman, Wei, and Xie (2004), Cooper, Gulen, and Schill (2008), and Hou, Xue, and Zhang (2014), among others.

    2 The regression of HML on the Fama-French factors must produce a perfect fit, with a loading of one on itself and zero on the other factors. Since the HML loading equals the difference between the value (H) and growth (L) portfolio loadings, that difference must equal one. But, of course, each of these portfolio loadings is a weighted average of the loadings for the stocks in the portfolio.

    While the economic interpretation of beta pricing can be unclear, determining the underlying causation for the cross-sectional explanatory power of a characteristic can likewise be challenging. For one thing, it is hard to rule out the possibility that the significance of a stock characteristic reflects the fact that it happens to line up well with the betas on some omitted risk factor. But we need not think solely in terms of risk. For example, Fama and French (2014) use observations about the standard discounted cash flow valuation equation to derive predictions about the relation between expected returns and stock characteristics: market equity, the book-to-market ratio, and the expected values of profitability and investment. This approach is more in the spirit of an implied cost of capital and, as they note, the predictions are the same whether the price is rational or irrational.

    Understanding what determines observed pricing patterns is undoubtedly important, but it is not the focus of this paper. Whatever the appropriate economic interpretation, important gaps remain in our knowledge about the relevant empirical relations. We fill some of those gaps. Whereas Fama and French (1993) and Davis, Fama, and French (2000) argue that it is factor loadings that explain expected returns, Daniel and Titman (1997) contend that it is characteristics. On the other hand, Brennan, Chordia, and Subrahmanyam (1998) present evidence that firm characteristics explain deviations from the three-factor model, whereas Avramov and Chordia (2006) find that size and book-to-market have no incremental effect (momentum and liquidity do) when the model's loadings are time varying. However, despite the considerable literature on this subject, we know of no study that directly evaluates how much of the cross-sectional variation in expected returns is accounted for by betas and how much by characteristics in a head-to-head competition. The main goal of this paper is to provide evidence on this issue using appropriate econometric methods.

    A number of methodological issues arise in this setting. Indeed, the lack of a consensus on the betas versus characteristics question stems, in part, from issues of experimental design. For example, Brennan, Chordia, and Subrahmanyam and Avramov and Chordia work with individual stocks and employ risk-adjusted returns as the dependent variable in their cross-sectional regressions (CSRs). In computing the risk adjustment, the prices of risk for the given

    3Similarly, Liu, Whited, and Zhang (2009) relate expected returns to stock characteristics in a framework based on q-theory.

    4While linear functions of the lagged values of profitability and investment may serve as rough proxies for the required expectations, a justification for substituting the corresponding factor loadings for the characteristics in this discounted cash flow (or the related q-theoretic) context has, to our knowledge, yet to be articulated.

    factors are constrained to equal the factor means and the zero-beta rate is taken to be the risk-free rate. A virtue of this approach is that the well-known errors-in-variables (EIV) problem is avoided, since the betas do not serve as explanatory variables. However, while this can be useful for the purpose of model testing, the relative contributions of loadings and characteristics cannot be inferred from such an experiment.

    Unlike these papers, we do not impose restrictions on the prices of risk or document patterns of model misspecification. Rather, we evaluate the role of loadings and of characteristics in the cross-sectional return relation that best fits the data when both are included as explanatory variables. Since (excess) returns, not risk-adjusted returns, serve as the dependent variable in this context, it is important to address the EIV problem. Typically, in empirical asset pricing work, stocks are grouped into portfolios to improve the estimates of beta and thereby mitigate the EIV problem. However, the particular method of portfolio grouping can dramatically influence the results (see Lo and MacKinlay (1990) and Lewellen, Nagel, and Shanken (2010)). Using individual stocks as test assets avoids this somewhat arbitrary element.

    Ang, Liu, and Schwarz (2010) also advocate the use of individual stocks, but from a statistical efficiency perspective, arguing that greater dispersion in the cross-section of factor loadings reduces the variability of the risk-premium estimator. Simulation evidence in Kim (1995) indicates, though, that mean-squared error is higher with individual stocks than it is with portfolios, due to the greater small-sample bias, unless the risk premium estimator is corrected for EIV bias. In this paper, we employ EIV corrections that build on the early work of Litzenberger and Ramaswamy (1979), perhaps the first paper to argue for the use of individual stocks, and extensions by Shanken (1992). We also correct for a potential bias that can arise when characteristics are time-varying and influenced by past returns, as is the case for size and several other characteristics. This influence can induce cross-sectional correlation between characteristics and the measurement errors in betas, a complication that, to our knowledge, has not previously been considered.
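    To illustrate the core EIV idea, here is a stylized univariate sketch with simulated data (not the paper's actual multivariate estimator, and all parameter values are hypothetical): the OLS slope of expected returns on estimated betas is attenuated toward zero, and subtracting the sampling variance of the beta estimates from the denominator undoes the bias, in the spirit of Litzenberger and Ramaswamy (1979).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
gamma = 0.6                                # true cross-sectional premium
beta = rng.normal(1.0, 0.5, N)             # true betas
s = 0.4                                    # sd of first-pass estimation error
beta_hat = beta + rng.normal(0, s, N)      # measured (noisy) betas
y = gamma * beta + rng.normal(0, 0.3, N)   # cross-section of expected returns

bh = beta_hat - beta_hat.mean()
yc = y - y.mean()
ols = (bh @ yc) / (bh @ bh)                # attenuated toward zero
eiv = (bh @ yc) / (bh @ bh - N * s**2)     # EIV-corrected slope

# attenuation factor is var(beta)/(var(beta)+s^2) = 0.25/0.41, about 0.61
print(ols, eiv)
```

    The same subtraction-of-error-variance logic carries over to the multivariate case, where an estimate of the measurement-error covariance matrix is removed from the cross-sectional X′X matrix.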

    We conduct our tests for a comprehensive sample of NYSE, AMEX, and NASDAQ stocks over the period 1963-2013. The independent variables in our CSRs consist of loadings as

    Ang, Liu, and Schwarz (2010) use an MLE framework with constant betas to develop analytical formulas for EIV correction to standard errors, but they do not address the bias in the estimated coefficients. Also, they seem to implicitly assume that the factor mean is known, which might explain the huge t-statistics that they report (see Jagannathan and Wang (2002) for a similar critique in the context of SDF models).

    well as firm characteristics. The asset pricing model betas examined in the paper are those of the CAPM, the Fama-French three- and five-factor models, and models that include a momentum factor along with the Fama-French factors. The firm characteristics that we examine are the “classic” characteristics (firm size, the book-to-market ratio, and past six-month returns) and the additional characteristics investment and the ratio of operating profitability to book equity.

    The results point to some evidence of a positive beta premium on the profitability (RMW) and investment (CMA) factors, a negative premium on the size factor (SMB), and a less robust positive premium on the market (for the multifactor beta, not the CAPM beta), but no evidence for the book-to-market (HML) or momentum (MOM) factors. Also, the estimated zero-beta rates exceed the risk-free rate by at least 6 percentage points (annualized), even with the additional factors and characteristics in the models. Our main finding is that firm characteristics consistently explain a much larger fraction of the variation in estimated expected returns than factor loadings do, even in the case of the six-factor model that augments the Fama-French five-factor model with the momentum factor. Moreover, all of the characteristic premia are reliably different from zero, with the familiar signs.

    The rest of the paper is organized as follows. The next section presents the methodology. Section II provides simulation evidence on the finite-sample behavior of the EIV correction that we employ. Section III presents the data and Section IV discusses the results. Section V explores the impact of time-varying premia. Section VI concludes.

    I. Methodology

    We run CSRs of individual stock returns on their factor loadings and characteristics, correcting for the biases discussed above.
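    The basic two-pass procedure underlying these CSRs can be sketched as follows (simulated data; all parameter values are hypothetical, and the paper's actual implementation adds the EIV and characteristic-timing corrections discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 300                       # months, stocks

# Simulated data: one factor, one characteristic, known premia
beta = rng.normal(1.0, 0.5, N)        # true betas
char = rng.normal(0.0, 1.0, N)        # e.g. a demeaned size characteristic
gamma_beta, gamma_char = 0.005, -0.003
f = rng.normal(0.005, 0.04, T)        # factor realizations
eps = rng.normal(0, 0.05, (T, N))
R = gamma_beta * beta + gamma_char * char + np.outer(f - f.mean(), beta) + eps

# Pass 1: time-series betas (for brevity we use the true betas here; in
# practice each stock's returns are regressed on the factors)
X = np.column_stack([np.ones(N), beta, char])

# Pass 2: month-by-month cross-sectional OLS, then average the coefficients
gammas = np.array([np.linalg.lstsq(X, R[t], rcond=None)[0] for t in range(T)])
gbar = gammas.mean(axis=0)
tstat = gbar / (gammas.std(axis=0, ddof=1) / np.sqrt(T))  # Fama-MacBeth t

print(gbar[1], gbar[2])   # near the true premia, 0.005 and -0.003
```

    The Fama-MacBeth t-statistics reported throughout the paper come from the time series of monthly coefficient estimates in exactly this way.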












    Panel B of Table 2 presents the time-series averages of the cross-sectional correlations between the different factor loadings and the characteristics. These are also corrected for EIV bias. As expected, the beta for SMB is negatively correlated with firm size, the HML beta is positively correlated with the book-to-market ratio, the profitability beta is positively correlated with operating profits, and the investment beta is negatively correlated with investment. The respective correlations are −0.43, 0.33, 0.26, and −0.12 (−0.33, 0.22, 0.16, and −0.08 without the EIV correction). Thus, there is considerable independent variation of the characteristics and the corresponding factor loadings, permitting identification of their separate effects on expected return.

    In addition, size and book-to-market have a correlation of −0.31, and the past six-month return has a correlation of −0.25 with book-to-market. Firm size is positively correlated, and book-to-market negatively correlated, with operating profits and investment. The correlation between profitability and investment is 0.27, suggesting that profitable firms have better opportunities as well as access to internal or external financing.

    IV. Cross-sectional Results

    We present results for the one-factor CAPM, the Fama and French (1993) three-factor model FF3, the four-factor model that augments the Fama and French (1993) model with the momentum factor (MOM), the five-factor Fama and French (2014) model FF5, and the six-factor model that augments the Fama and French (2014) model with the momentum factor. Separate analysis of these factor models helps in assessing the additional importance of the various factors. We present the standard Fama and MacBeth (1973) coefficients as well as bias-corrected coefficients side by side in all our results. This facilitates an evaluation of the importance of the bias correction to the estimated premia. Finally, we report the Fama and MacBeth (1973) t-statistics.

    In the next subsection, we present the results for the sample of all stocks and later we will present the results for the sample of non-microcap stocks.

    IV.A. All stocks

    Since our goal in this paper is to examine the relative contributions of factor loadings and characteristics to expected returns, we will present results for the Fama-MacBeth (1973) regressions that include characteristics along with the betas. However, we have also examined the factor models in the absence of the characteristics and first discuss these results.15 Across all models, from the single-factor CAPM to the six-factor model, the risk premium on the market is negative and statistically insignificant. The risk premiums on SMB, MOM, and RMW are also statistically indistinguishable from zero. This contrasts with the significance of the corresponding means for these factors in Fama and French (2014) and suggests that the associated expected return relation is violated for these models. The risk premium on HML is positive and significant in FF3 and FF5, but is no longer significant when MOM is included in the four- and six-factor models. For instance, the risk premium estimate for HML is 0.35% per month in FF3. The risk premium on CMA is positive and significant: it is 0.27% per month in FF5 and 0.23% per month in the six-factor model.

    Panel A of Table 3 reports results for the factor models when the characteristics Sz, B/M, and Ret6 are included in the Fama-MacBeth regressions. Panel B of Table 3 adds the firm-level characteristics Profit and Invest. With uncorrelated factors, estimation error in the betas would bias all of the estimated risk premiums toward zero. While there is some correlation between the factors, we find nonetheless that correcting the EIV bias generally increases the risk premium estimates, sometimes by over 100%.

    Consider first the results in Panel A. In the one-factor and FF3 models, the market beta is not priced when the characteristics are included in the CSRs.16 In the case of FF5, the market risk premium is 0.42% per month with a t-statistic of 2.16. For comparison, the sample average market excess return is 0.50% per month. The beta premium on SMB is negative across all factor models despite its positive sample mean. For instance, in FF3, the premium is −0.29% per month with a t-statistic of −2.21. The negative premium may seem odd, but it is important to note that this premium captures the partial effect on return of the SMB beta, controlling for the size characteristic and the other variables (similarly for the other factors). With nonzero characteristic premiums, the usual restriction that the beta premiums equal the factor means need not hold under the cross-sectional model.

    Unlike the case where the firm-level characteristics were not included in the regressions, the beta premium for HML is now no longer significant, possibly due to competition between the HML beta and the book-to-market ratio. The beta premiums on RMW and CMA are both

    15These results are available upon request.

    16For conciseness, we refer to FF3 or FF5 to identify the factors, but from this point on the models always include the characteristics as well.

    significant, with respective estimates of 0.31 (t-statistic=2.46) and 0.22 (t-statistic=2.33) in FF5 and estimates of 0.26 (t-statistic=2.34) and 0.18 (t-statistic=2.00) in the case of the six-factor model.

    The intercepts in the second-pass regressions are around 6% to 8% per year, with t-statistics of about four or more. Since the characteristics are measured as deviations from NYSE means, the intercepts can be interpreted as the expected return on a zero-beta portfolio with weighted characteristics equal to the NYSE average. Such large differences between the zero-beta rate and the risk-free rate, common in the literature going back to Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973), are hard to fully reconcile with more general versions of the CAPM that incorporate restrictions on borrowing.

    The premia on the firm characteristics are also noteworthy: as usual, large firms earn lower returns, value firms earn higher returns, and firms with higher past returns continue to earn higher returns, and the estimates are all statistically significant. In economic terms, for the bias-corrected six-factor model, a one standard deviation increase in firm size decreases monthly returns by 28 basis points, a one standard deviation increase in the book-to-market ratio leads to an increase in returns of 24 basis points per month, and a one standard deviation increase in past six-month returns raises returns by 43 basis points per month.
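    The economic magnitudes quoted above are simply the premium estimate times one cross-sectional standard deviation of the characteristic. A minimal worked example (the coefficient and standard deviation below are hypothetical values chosen only to reproduce the 28 basis point size effect):

```python
# Economic magnitude: premium coefficient times one cross-sectional
# standard deviation of the characteristic (illustrative numbers only)
coef_size = -0.0014                        # hypothetical monthly premium per unit of Sz
sd_size = 2.0                              # hypothetical cross-sectional sd of Sz
effect_bp = coef_size * sd_size * 10000    # effect in basis points per month
print(effect_bp)                           # about -28 bp per month
```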

    The CAPM and FF3 results for the firm characteristics are similar to those in Brennan, Chordia, and Subrahmanyam (1998) and imply rejection of those beta-pricing models. However, Brennan, Chordia, and Subrahmanyam relate beta-adjusted returns to characteristics, with risk premiums restricted to equal the factor means and the zero-beta rate equal to the riskless rate. In contrast, we let the loadings and characteristics compete without constraints on the risk premia or the zero-beta rate. What we learn from the new results is that the premia on the firm characteristics (specifically Sz, B/M, and Ret6) remain significant even without those constraints and with the addition of the factors RMW, CMA, and MOM.

    There is a controversy in the literature about the interpretation of the size and value effects. Fama and French (1993) and Davis, Fama, and French (2000) argue that these empirical phenomena point to the existence of other risk factors, proxied for by SMB and HML. In other words, these studies claim that factor loadings explain cross-sectional variation in expected returns. Daniel and Titman (1997), on the other hand, show that portfolios of firms with similar

    17See also Frazzini and Pedersen (2013), who show that high zero-beta returns are obtained for most countries.

    characteristics but different loadings on the Fama and French factors have similar average returns. They conclude from this finding that it is characteristics that drive cross-sectional variation in expected returns. None of these studies, however, runs a direct horse race between the two competing hypotheses. Our approach using individual stocks is designed to directly address this controversy: we allow both factor loadings and characteristics to jointly explain the cross-section of returns.

    The average cross-sectional adj-R2 values (not reported) are higher when the characteristics are included as independent variables in the cross-sectional regressions than when they are not. This might seem to provide prima-facie evidence about the additional explanatory power of characteristics (beyond market beta) in the cross-section of returns. However, one cannot draw conclusions about the relative explanatory power of characteristics and betas by comparing these adj-R2s. To see this, consider a scenario in which the ex-post coefficient on an explanatory variable is positive (+x, for instance) and significant in half the sample and negative (−x, for instance) and significant in the other half. The computed average of the cross-sectional adj-R2s could be high even though the coefficient is zero on average and carries no ex-ante premium.
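    The pitfall described above is easy to reproduce in a small simulation (hypothetical data): the premium on a characteristic flips sign halfway through the sample, so the average coefficient is near zero even though the average cross-sectional R2 remains high.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 200, 100
z = rng.normal(0, 1, N)
z -= z.mean()                            # demeaned characteristic

r2s, slopes = [], []
for t in range(T):
    sign = 1.0 if t < T // 2 else -1.0   # premium flips sign mid-sample
    r = sign * z + rng.normal(0, 0.3, N)
    rc = r - r.mean()
    b = (z @ rc) / (z @ z)               # monthly CSR slope
    r2 = 1 - ((rc - b * z) ** 2).sum() / (rc ** 2).sum()
    slopes.append(b)
    r2s.append(r2)

# high average R^2 even though the average premium is essentially zero
print(np.mean(slopes), np.mean(r2s))
```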

    To address these problems with adj-R2s, it is common in the literature to report the adj-R2 from a single regression of average returns on unconditional betas for a set of test asset portfolios (see Kan, Robotti, and Shanken (2013)). This is problematic in our context, as our regressions are for individual stocks in an unbalanced panel. One approach would be to report the adj-R2 for a regression of average returns on average betas and average characteristics. However, a momentum characteristic averaged over time would display minimal cross-sectional variation and, therefore, its highly significant explanatory power for expected returns would essentially be neglected by such an adj-R2 measure. For these reasons, we do not report adj-R2s for our regressions. Instead, we report measures of the relative contributions that loadings and characteristics make toward explaining the variation in expected returns, as discussed in Section I.C.

    The last four rows of Table 3 present the contributions made by factor loadings and characteristics, followed by the contribution differences and a 95% bootstrap confidence interval

    for the latter, computed following the procedure in Section I.C. Focusing on the bias-corrected coefficients, we find that the CAPM beta explains only 0.8% of the cross-sectional variation while the characteristics explain 104.2%; in the case of FF3, the betas explain 12% and the characteristics explain 110%; with the four-factor model, the betas explain about 11% and the characteristics 109%; with FF5, it is betas 31% and characteristics 97%; and for the six-factor model, betas explain 24% and characteristics 102%. Clearly, the characteristics explain an overwhelming majority of the variation in expected returns. This is confirmed by the 95% confidence intervals, which, in each case, indicate that the difference is significantly positive at the 5% level. The best showing for beta is in FF5, but even there the point estimate of the difference is 67% and the confidence interval indicates a difference of at least 36%.
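    Section I.C's exact contribution measure is not reproduced in this excerpt; a plausible variance-share reading, consistent with footnote 19's remark that the shares need not sum to 100%, can be sketched as follows (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400
# hypothetical expected-return components across N stocks
mu_beta = rng.normal(0.0, 0.1, N)                   # part driven by betas
mu_char = -0.5 * mu_beta + rng.normal(0.0, 0.3, N)  # part driven by characteristics
mu = mu_beta + mu_char                              # total expected return

def share(component, total):
    """Variance of one component relative to the variance of the total."""
    return component.var() / total.var()

sb, sc = share(mu_beta, mu), share(mu_char, mu)
# negative correlation between the components lets the shares exceed 100% in total
print(sb, sc, sb + sc)
```

    In the paper, a bootstrap confidence interval is then placed on the difference between the two shares; under this reading, a pair like 0.8% and 104.2% signals mild negative correlation between the beta and characteristic components.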

    The findings when we include the additional firm-level characteristics Profit and Invest in Panel B of Table 3 are very similar. The risk premium on the market beta is not significantly different from zero in the CAPM and the four-factor model, but it is significant in the other cases. For instance, the market risk premium is 0.47% in FF5. The premiums for SMB are still negative, but significant at the 5% level only in the case of FF5. The premium on RMW remains significant in the five- and six-factor models, but the premiums on HML, MOM, and CMA are never reliably different from zero. As compared to Panel A, the CMA beta loses its significance, probably due to competition with the corresponding characteristic Invest.

    Even with the additional factor loadings included, the characteristic premiums for size, book-to-market, past six-month return, profitability, and investment growth are all consistent with the prior literature and highly significant. In economic terms, for the six-factor model, a one standard deviation increase in Sz, B/M, Ret6, Profit, and Invest changes returns by −31, 23, 40, 21, and −22 basis points per month, respectively. Once again, the characteristics explain most of the variation in expected returns for this specification. Similarly, the bootstrap confidence intervals are consistent with a significantly larger fraction of the variation in returns being explained by characteristics as compared to the factor loadings.

    IV.B. Non-microcap stocks

    18A comparison between our results and those in Daniel and Titman (1997) is complicated by the fact that we use past returns as an additional characteristic in our cross-sectional regressions.
    19Recall that the total percent explained can differ from 100% because of correlation between the components of expected returns due to betas and due to characteristics.

    Next, we turn our attention to non-microcap stocks, which, following Fama and French (2008), include all stocks whose market capitalization is larger than that of the 20th percentile of NYSE stocks. Table 4 shows the second-stage CSR beta-premium estimates for the different models as well as the characteristic premiums. Panel A presents the results with the characteristics Sz, B/M, and Ret6 included in the regressions, and Panel B includes Profit and Invest along with the characteristics in Panel A. The bias-corrected beta premiums for the market, HML, and MOM are not statistically significant in either of the two panels. However, the premium on SMB is significantly negative in the four-, five-, and six-factor models in Panel A and in the four- and six-factor models in Panel B of Table 4. The premiums on the RMW and CMA betas are generally significant in Panel A of Table 4, but in Panel B only the premium on RMW is significant, and only in the six-factor model. This suggests that the factors RMW and CMA are robustly priced only in the absence of the firm-level characteristics Profit and Invest. All of the characteristic premiums, i.e., those for size, book-to-market, past return, profitability, and investment, are statistically and economically significant. The bias-corrected estimates all have t-statistics greater than two (and often much larger) in both panels of Table 4.
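    A minimal sketch of the Fama-French (2008) microcap screen, on simulated data (the exchange codes and market caps are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
exch = rng.choice(np.array(["NYSE", "AMEX", "NASDAQ"]), n)
mktcap = rng.lognormal(6, 2, n)   # hypothetical market capitalizations

# Drop "microcaps": keep stocks above the 20th percentile of NYSE market cap.
# Note the breakpoint is computed from NYSE stocks only, then applied to all.
cut = np.quantile(mktcap[exch == "NYSE"], 0.20)
keep = mktcap > cut
print(keep.sum(), cut)
```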

    The economic magnitudes and statistical significance reported thus far indicate that both factor loadings and characteristics matter for non-microcap stocks. But how much variation does each explain? Note, first, that the contribution of factor loadings to the variation in expected returns, as shown in Table 4, increases with the number of factors in the asset pricing models. This contrasts with the all-stock results, where the contribution of betas declined with the addition of MOM to FF5. However, as in Table 3, the contribution of characteristics far exceeds that of the factor loadings in all cases presented in Panels A and B of Table 4. The corresponding differences are statistically significant except for FF5 in Panel A, where the difference of 39.7% is not quite distinguishable from zero at the 5% level, given the wide confidence interval.

    IV.D. Additional robustness checks

    Recall that, in implementing the EIV correction, we switch to OLS estimation in a given month if the “correction” leads to an X′X estimate that is not positive definite or if the premium estimator is an “outlier,” i.e., differs from the factor realization by more than 20%. These issues are encountered only with four or more factors and occur in at most nine months with fewer than six factors. For the six-factor model, there are 23 not-positive-definite months and eight outliers.

    We have also explored 10% and 50% outlier criteria. Not surprisingly, there are many more outliers with 10%, but our main conclusion, that characteristics explain much more variation in expected returns than betas, is not sensitive to the treatment of outliers. Individual beta-premium coefficients are occasionally materially affected, however. For example, the premium for RMW in the five-factor model with all characteristics goes from 0.24 (t-statistic=2.01) to 0.16 (t-statistic=1.45) with a 10% outlier cutoff. There is a larger change for the MOM beta premium in the six-factor model, but none of the estimates is statistically significant.

    We have also conducted the analysis without including the correction, described in the appendix, for time-varying characteristics. While the tenor of the results is unchanged, the impact on the magnitude of the return premia is occasionally non-trivial (around 30% up or down).

    Finally, a conditional time-series regression framework for estimating betas with monthly returns has also been explored. Here, each individual stock beta is allowed to vary as a linear function (for simplicity) of the corresponding characteristic and each stock alpha is a linear function of all the characteristics, similar to the approach in Shanken (1990). Thus, the beta on SMB depends on size, the beta on HML depends on book-to-market, etc. Details are provided in Appendix C. This approach is appealing (in principle), since it directly addresses the possibility that, with betas assumed to be constant, the appearance of significant pricing of a characteristic such as size may actually be a reflection of the premium for a time-varying SMB beta. In practice, however, we encountered the not-positive-definite problem with greater frequency and found no evidence of beta pricing other than a t-statistic of 2.0 on the RMW beta in the six-factor specification. Again, characteristics dominate.
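    A minimal sketch of such a conditional time-series regression, for one stock, one factor, and one characteristic (simulated data with hypothetical parameter values; the paper's version puts all characteristics in the alpha and is detailed in Appendix C):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 240
f = rng.normal(0.004, 0.045, T)   # a factor, e.g. SMB
c = rng.normal(0, 1, T)           # the stock's (demeaned) characteristic each month
b0, b1 = 1.2, -0.4                # beta varies linearly with the characteristic
r = 0.002 + (b0 + b1 * c) * f + rng.normal(0, 0.05, T)

# time-series regression with a characteristic-factor interaction term:
#   r_t = a0 + a1*c_t + (b0 + b1*c_t)*f_t + e_t
X = np.column_stack([np.ones(T), c, f, c * f])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
print(coef[2], coef[3])   # estimates of b0 and b1
```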

    V. Time-Varying Premia

    In this section, we consider the possibility that the expected return premia for loadings or cross-sectional characteristics are time varying, and we examine the impact that this has on our measures of the relative contributions to cross-sectional expected-return variation. Following Ferson and Harvey (1991), we estimate changing premia via time-series regressions of the monthly CSR estimates on a set of predictive variables. The idea is that the premium estimate for

    20See related work by Ferson and Harvey (1998), Lewellen (1999), and Avramov and Chordia (2006).
    21Concerned about the possibility of noise related to the large number of parameters that must be estimated in these time-series regressions for individual stocks, we also tried zeroing-out estimates of the interaction terms with t-statistics less than one. This made little difference in the results.
    22Gagliardini, Ossola, and Scaillet (2011) also consider time-varying premia in large cross sections.

    a given month is equal to the true conditional premium plus noise. Therefore, regressing that series on relevant variables known at the beginning of each month identifies the expected component.
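    This premium-on-predictors regression can be sketched as follows (simulated data; the predictor z and all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 480
z = rng.normal(0, 1, T)                   # predictor known at the start of month t
true_prem = 0.003 + 0.002 * z             # conditional premium
gamma_hat = true_prem + rng.normal(0, 0.02, T)   # noisy monthly CSR estimates

# Regress the estimated premium series on the lagged predictor: the noise
# averages out and the fitted values recover the expected component.
X = np.column_stack([np.ones(T), z])
b, *_ = np.linalg.lstsq(X, gamma_hat, rcond=None)
fitted = X @ b                            # estimated conditional premia
print(b)                                  # close to [0.003, 0.002]
```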


    Amihud, Yakov, and Clifford M. Hurvich, 2004, “Predictive Regression: A Reduced-Bias Estimation Method,” Journal of Financial and Quantitative Analysis 39, 813–841.

    Amihud, Yakov, Clifford M. Hurvich, and Yi Wang, 2008, “Multiple-Predictor Regressions: Hypothesis Testing,” Review of Financial Studies 22, 413–434.

    Ang, Andrew, Jun Liu, and Krista Schwarz, 2010, “Using Stocks or Portfolios in Tests of Factor Models,” Working paper, Columbia University.

    Avramov, Doron, and Tarun Chordia, 2006, “Asset Pricing Models and Financial Market Anomalies,” Review of Financial Studies 19, 1001–1040.

    Black, Fischer, Michael C. Jensen, and Myron Scholes, 1972, “The Capital Asset Pricing Model: Some Empirical Tests,” in M. C. Jensen, ed., Studies in the Theory of Capital Markets, pp. 79–121 (Praeger, New York).

    Brennan, Michael, Tarun Chordia, and Avanidhar Subrahmanyam, 1998, “Alternative Factor Specifications, Security Characteristics and the Cross-Section of Expected Stock Returns,” Journal of Financial Economics 49, 345–373.

    Boudoukh, Jacob, Roni Michaely, Matthew Richardson, and Michael R. Roberts, 2007, “On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing,” Journal of Finance 62, 877–915.

    Campbell, John Y., 1991, “A Variance Decomposition for Stock Returns,” Economic Journal 101, 157–179.

    Carhart, Mark, 1997, “On Persistence in Mutual Fund Performance,” Journal of Finance 52, 57–82.

    Cooper, Michael J., Huseyin Gulen, and Michael J. Schill, 2008, “Asset Growth and the Cross-Section of Stock Returns,” Journal of Finance 63, 1609–1651.

    Daniel, Kent, and Sheridan Titman, 1997, “Evidence on the Characteristics of Cross-Sectional Variation in Common Stock Returns,” Journal of Finance 52, 1–33.

    Davis, James, Eugene F. Fama, and Kenneth R. French, 2000, “Characteristics, Covariances, and Average Returns: 1929-1997,” Journal of Finance 55, 389–406.

    Dimson, E., 1979, “Risk Measurement when Shares are Subject to Infrequent Trading,” Journal of Financial Economics 7, 197–226.

    Efron, Bradley, 1987, “Better Bootstrap Confidence Intervals,” Journal of the American Statistical Association 82, 171–185.

    Efron, Bradley, and Robert Tibshirani, 1993, An Introduction to the Bootstrap, Chapman & Hall.

    Fama, Eugene F., and Kenneth R. French, 1989, “Business Conditions and Expected Returns on Stocks and Bonds,” Journal of Financial Economics 25, 23–49.

    Fama, Eugene F., and Kenneth R. French, 1992, “The Cross-Section of Expected Stock Returns,” Journal of Finance 47, 427–465.

    Fama, Eugene F., and Kenneth R. French, 1993, “Common Risk Factors in the Returns on Stocks and Bonds,” Journal of Financial Economics 33, 3–56.

    Fama, Eugene F., and Kenneth R. French, 2008, “Dissecting Anomalies,” Journal of Finance 63, 1653–1678.

    Fama, Eugene F., and Kenneth R. French, 2012, “Size, Value, and Momentum in International Stock Returns,” Journal of Financial Economics 105, 457–472.

    Fama, Eugene F., and Kenneth R. French, 2014, “A Five-Factor Asset Pricing Model,” Working paper.

    Fama, Eugene F., and James D. MacBeth, 1973, “Risk, Return and Equilibrium: Empirical Tests,” Journal of Political Economy 81, 607–636.

    Ferson, Wayne E., and Campbell R. Harvey, 1991, “The Variation of Economic Risk Premiums,” Journal of Political Economy 99, 385–415.

    Ferson, Wayne E., and Campbell R. Harvey, 1998, “Fundamental Determinants of National Equity Market Returns: A Perspective on Conditional Asset Pricing,” Journal of Banking and Finance 21, 1625–1665.

    Ferson, Wayne E., and Campbell R. Harvey, 1999, “Conditioning Variables and the Cross-Section of Stock Returns,” Journal of Finance 54, 1325–1360.

    Frazzini, Andrea, and Lasse Heje Pedersen, 2011, “Betting Against Beta,” Working paper, New York University.

    Gagliardini, Patrick, Elisa Ossola, and Olivier Scaillet, 2011, “Time-Varying Risk Premium in Large Cross-Sectional Equity Datasets,” Working paper, Swiss Finance Institute.

    Haugen, Robert A., and Nardin L. Baker, 1996, “Commonality in the Determinants of Expected Stock Returns,” Journal of Financial Economics 41, 401–439.

    Hou, Kewei, Chen Xue, and Lu Zhang, 2014, “Digesting Anomalies: An Investment Approach,” Review of Financial Studies.

    Jagannathan, Ravi, and Zhenyu Wang, 1998, “An Asymptotic Theory for Estimating Beta-Pricing Models Using Cross-Sectional Regressions,” Journal of Finance 53, 1285–1309.

    Jagannathan, Ravi, and Zhenyu Wang, 2002, “Empirical Evaluation of Asset-Pricing Models: A Comparison of the SDF and Beta Methods,” Journal of Finance 57, 2337–2367.

    Jegadeesh, Narasimhan, and Sheridan Titman, 1993, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency,” Journal of Finance 48, 65–92.

    Kan, Raymond, Cesare Robotti, and Jay Shanken, 2013, “Pricing Model Performance and the Two-Pass Cross-Sectional Regression Methodology,” Journal of Finance 68, 2617–2649.

    Kim, Dongcheol, 1995, “The Errors in the Variables Problem in the Cross-Section of Expected Stock Returns,” Journal of Finance 50, 1605–1634.

    Lewellen, Jonathan W., 1999, “The Time-Series Relations Among Expected Return, Risk, and Book-to-Market,” Journal of Financial Economics 54, 5–43.

    Lewellen, Jonathan W., Stefan Nagel, and Jay Shanken, 2010, “A Skeptical Appraisal of Asset Pricing Tests,” Journal of Financial Economics 96, 175–194.

    Lintner, John, 1965, “Security Prices, Risk and Maximal Gains from Diversification,” Journal of Finance 20, 587–616.

    Litzenberger, Robert H., and Krishna Ramaswamy, 1979, “The Effect of Personal Taxes and Dividends on Capital Asset Prices: Theory and Empirical Evidence,” Journal of Financial Economics 7, 163–196.

    Liu, Laura, Toni Whited, and Lu Zhang, 2009, “Investment-Based Expected Stock Returns,” Journal of Political Economy 117, 1105–1139.

    Lo, Andrew, and A. Craig MacKinlay, 1990, “Data-Snooping Biases in Tests of Financial Asset Pricing Models,” Review of Financial Studies 3, 431–468.

    MacKinlay, A. Craig, 1995, “Multifactor Models Do Not Explain Deviations from the CAPM,” Journal of Financial Economics 38, 3–28.

    Mossin, Jan, 1966, “Equilibrium in a Capital Asset Market,” Econometrica 34, 768–783.

    Roll, Richard, 1977, “A Critique of the Asset Pricing Theory’s Tests Part I: On Past and Potential Testability of the Theory,” Journal of Financial Economics 4, 129–176.

    Sharpe, William, 1964, “Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk,” Journal of Finance 19, 425–442.

    Shanken, Jay, 1990, “Intertemporal Asset Pricing: An Empirical Investigation,” Journal of Econometrics 45, 99–120.

    Shanken, Jay, 1992, “On the Estimation of Beta-Pricing Models,” Review of Financial Studies 5, 1–33.

    Shanken, Jay, and Guofu Zhou, 2007, “Estimating and Testing Beta-Pricing Models: Alternative Methods and Their Performance in Simulations,” Journal of Financial Economics 84, 40–86.

    Stambaugh, Robert F., 1999, “Predictive Regressions,” Journal of Financial Economics 54, 375–421.

    Titman, Sheridan, John K.C. Wei, and Fei xue Xie, 2004, “Capital Investments and Stock Returns,” Journal of Financial and Quantitative Analysis 39, 677–700.

    White, Halbert, 1980, “A Heteroskedasticity-C onsistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity,” Econometrica 48, 817–838























  4. Kenneth D. West Paper

    The Equilibrium Real Funds Rate: Past, Present and Future

    James D. Hamilton
    University of California at San Diego and NBER

    Ethan S. Harris
    Bank of America Merrill Lynch

    Jan Hatzius
    Goldman Sachs

    Kenneth D. West
    University of Wisconsin and NBER

    February 2015
    Revised August 2015

    This paper was written for the U.S. Monetary Policy Forum, New York City, February 27, 2015. We thank Jari Stehn and David Mericle for extensive help with the modeling work in Section 6. We also thank Chris Mischaikow, Alex Verbny, Alex Lin and Lisa Berlin for assistance with data and charts and for helpful comments and discussions. We also benefited from comments on an earlier draft of this paper by Mike Feroli, Peter Hooper, Anil Kashyap, Rick Mishkin, Kim Schoenholtz, and Amir Sufi. West thanks the National Science Foundation for financial support.


    We examine the behavior, determinants, and implications of the equilibrium level of the real federal funds rate, defined as the rate consistent with full employment and stable inflation in the medium term. We draw three main conclusions. First, the uncertainty around the equilibrium rate is large, and its relationship with trend GDP growth much more tenuous than widely believed. Our narrative and econometric analysis using cross-country data and going back to the 19th century supports a wide range of plausible central estimates for the current level of the equilibrium rate, from a little over 0% to the pre-crisis consensus of 2%. Second, despite this uncertainty, we are skeptical of the "secular stagnation" view that the equilibrium rate will remain near zero for many years to come. The evidence for secular stagnation before the 2008 crisis is weak, and the disappointing post-2008 recovery is better explained by protracted but ultimately temporary headwinds from the housing supply overhang, household and bank deleveraging, and fiscal retrenchment. Once these headwinds had abated by early 2014, US growth did in fact accelerate to a pace well above potential. Third, the uncertainty around the equilibrium rate implies that a monetary policy rule with more inertia than implied by standard versions of the Taylor rule could be associated with smaller deviations of output and inflation from the Fed's objectives. Our simulations using the Fed staff's FRB/US model show that explicit recognition of this uncertainty results in a later but steeper normalization path for the funds rate compared with the median "dot" in the FOMC's Summary of Economic Projections.

    1. Introduction

    What is the steady-state value of the real federal funds rate? Is there a new neutral, with a low equilibrium value for the foreseeable future? By the beginning of 2015, a consensus seemed to be building that the answer to the second question is yes. Starting in 2012, FOMC members have been releasing their own estimates of the "longer run" nominal rate in the now somewhat infamous "dot plot." As Exhibit 1.1 shows, the longer run projection for PCE inflation has remained steady at 2.0%, but longer run projections for both GDP growth and the nominal funds rate have dropped 25 bp. The implied equilibrium real rate has fallen from 2.0% to 1.75%, and the current range among members extends from 1.25% to 2.25%. Indeed, going back to January 2012, the first FOMC projections for the longer run funds rate had a median of 4.25%, suggesting an equilibrium real rate of 2.25%. Forecasters at the CBO, OMB, Social Security Administration and other official longer term forecasts show a similar cut in the assumed equilibrium rate, typically from 2% to 1.5%.

    The consensus outside official circles points to an even lower equilibrium rate. A hot topic of discussion in the past year or so is whether the U.S. has drifted into "secular stagnation," a period of chronically low equilibrium rates due to persistently weak demand for capital, a rising propensity to save and lower trend growth in the economy (see Summers (2013b, 2014)). A similar view holds that there is a "new neutral" for the funds rate of close to zero in real terms (see McCulley (2003) and Clarida (2014)). The markets seem to agree. As of February 2015, the bond market was pricing in a peak nominal funds rate of less than 2½% (see Misra (2015)).

    The view that the equilibrium rate is related to trend growth is long standing. For example, in Taylor's (1993) seminal paper the equilibrium rate (the real funds rate consistent with full employment and stable inflation) was assumed to be 2%. Why 2%? Because it was "close to the assumed steady state growth rate of 2.2%," which, as Taylor noted at the time, was the average growth rate from 1984:1 to 1992:3. Perhaps the best known paper to formally estimate a time-varying equilibrium rate is Laubach and Williams (2003), which makes trend growth the central determinant of the equilibrium rate.

    A tight link between the equilibrium rate and growth is common in theoretical models. The Ramsey model relates the safe real rate to a representative consumer's discount factor and expected consumption growth. So, too, does the baseline New Keynesian model, whose generalization is central to much policy and academic work. Thus these familiar models tie the equilibrium rate to the trend rate of growth in consumption, and thus in the economy. In those models, shifts in trend growth will shift the equilibrium rate. In more elaborate models, shifts in the level of uncertainty or other model forces can also shift the equilibrium rate. Empirical estimates of New Keynesian models such as Barsky et al. (2014) and Curdia et al. (2014) find considerable variation in the natural rate of interest.
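    The Ramsey link described above is usually written as the steady-state Euler equation with CRRA utility. The display below is the textbook relation, not an equation taken from this paper (whose own intertemporal condition is numbered (3.4)):

```latex
% Steady-state Euler equation (Ramsey condition) with CRRA utility:
%   r      = safe real rate
%   \rho   = rate of time preference
%   \gamma = coefficient of relative risk aversion
%   g      = trend growth rate of consumption
r = \rho + \gamma \, g
```

    With log utility (γ = 1), a one-percentage-point drop in trend growth lowers the equilibrium real rate one-for-one, which is the mechanism behind trend-growth-centered estimates such as Laubach and Williams (2003).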

    In other words, the equilibrium rate may be time varying. Such time variation is very important for much of the discussion of current monetary policy.

    In this paper, we address the question of a "new neutral" by examining the experience from a large number of countries, though focusing on the U.S. In Section 2 we describe the data and procedures that we will use to construct the ex-ante real rates used in our analysis. These go back as far as two centuries for some countries, and also include more detailed data on the more recent experience of OECD economies. We also note the strategy we often use to make empirical statements about the equilibrium rate: for the most part we will look to averages or moving averages of our measures of real rates; at no point will we estimate a structural model.

    Section 3 summarizes and interprets some of the existing theoretical and empirical work and highlights the theoretical basis for anticipating a relation between the equilibrium real rate and the trend growth rate. In this and the next section, we look to moving averages as (noisy) measures of the equilibrium rate and the trend growth rate. Using both long time-series observations for the United States as well as the experience across OECD countries since 1970, we investigate the relation between safe real rates and trend output growth. We uncover some evidence that higher trend growth rates are associated with higher average real rates. However, that finding is sensitive to the particular sample of data that is used. And even for the samples with a positive relation, the correlation between growth and average rates is modest. We conclude that factors in addition to changes in the trend growth rate are central to explaining why the equilibrium real rate changes over time.

    In Section 4 we provide a narrative history of determinants of the real rate in the U.S., trying to identify the main factors that may have moved the equilibrium rate over time. We conclude that changes over time in personal discount rates, financial regulation, trends in inflation, bubbles and cyclical headwinds have had important effects on the real rate observed on average over any given decade. We discuss the secular stagnation hypothesis in detail. On balance, we find it unpersuasive, arguing that it probably confuses a delayed recovery with chronically weak aggregate demand. Our analysis suggests that the current cycle could be similar to the last two, with a delayed "normalization" of both the economy and the funds rate. Our narrative approach suggests the equilibrium rate may have fallen, but probably only slightly. Presumptively lower trend growth implies an equilibrium rate below the 2% average that has recently prevailed, perhaps somewhere in the 1% to 2% range.

    In Section 5 we perform some statistical analysis of the long-run U.S. data and find, consistent with our narrative history as well as with empirical results found by other researchers in postwar datasets, that we can reject the hypothesis that the real interest rate converges over time to some fixed constant. We do find a relation that appears to be stable: the U.S. real rate is cointegrated with a measure that is similar to the median of a 30-year average of real rates around the world. When the U.S. rate is below that long-run world rate (as it is as of the beginning of 2015), we can have some confidence that the U.S. rate is going to rise, consistent with the conclusion from our narrative analysis in Section 4. The model forecasts the U.S. and world long-run real rates settling down at a value around half a percent within about three years. However, because the world rate itself is also nonstationary, with no clear tendency to revert to a fixed mean, the uncertainty associated with this forecast grows larger the farther we try to look into the future.

    Indeed, the confidence interval is wide, from 1 to 2 percentage points depending on how far out one forecasts. This confidence interval only partially overlaps with Section 4's narrative range of 1%-2%. Both ranges include the FOMC forecast implied by the numbers in Exhibit 1.1. We do not attempt to formally reconcile our two ranges. Rather, we conclude that the U.S. real rate will rise, but that it is very hard for anyone to predict what the average value might turn out to be over the next decade.

    More generally, the picture that emerges from our analysis is that the determinants of the equilibrium rate are manifold and time varying. We are skeptical of analysis that puts growth of actual or potential output at the center of real interest rate determination. The link with growth is weak. Historically, that link seems to have been buried by effects from factors listed above, such as regulation and bubbles. We conclude from both formal and descriptive analysis that reasonable forecasts for the equilibrium rate will come with large confidence intervals.

    We close the paper in Section 6 by considering the implications of uncertainty about the equilibrium rate for the conduct of monetary policy. Orphanides and Williams (2002, 2006, 2007) have noted that if the Fed does not have a good estimate of what the equilibrium real rate should be, it may be better able to achieve its objectives by putting more inertia into its decisions than otherwise. We use simulations of the FRB/US model to gauge the relevance of this concern in the current setting. We evaluate a range of policies using an objective function that has often been applied for this kind of analysis, and consider how greater uncertainty about the equilibrium rate affects policy performance. Our results suggest that, relative to the "shallow glide path" for the funds rate that has featured prominently in recent Fed communications, when there is greater uncertainty about the equilibrium rate, a policy of raising rates later but (provided the recovery does gather pace and inflation picks up) somewhat more steeply may deliver a higher value of the objective function.

    To conclude, the evidence suggests to us that the secular stagnationists are overly pessimistic. We think the long-run equilibrium U.S. real interest rate remains significantly positive, and forecasts that the real rate will remain stuck at or below zero for the next decade appear unwarranted. But we find little basis in the data for stating with confidence exactly what the value of the equilibrium real rate is going to be. In this respect our conclusion shares some common ground with the stagnationists. When the equilibrium real rate is not known, a policy of initially raising rates more slowly achieves a higher value for the objective function in our simulations compared to a policy that incorrectly assumes that the equilibrium real rate is known with certainty.

    2. The real interest rate across countries and across time

    Our focus is on the behavior of the real interest rate, defined as the nominal short-term policy rate minus expected inflation. The latter is of course not measured directly, and we follow the common approach in the literature of inferring expected inflation from the forecast of an autoregressive model fit to inflation. However, we differ from most previous studies in that we allow the coefficients of our inflation-forecasting relations to vary over time. We will be making use of both a very long annual data set going back up to two centuries as well as a quarterly data set available for more recent data. The countries we will be examining are listed in Exhibit 2.1. In this section we describe these data and our estimates of real interest rates.

    2A. A very long-run annual data set

    Our long-run analysis is based on annual data going as far back as 1800 for 17 different countries. Where available we used the discount rate set by the central bank as of the end of each year. For the Bank of England this gives us a series going all the way back to 1801, while for the U.S. we spliced together values for commercial paper rates over 1857-1913, the Federal Reserve discount rate over 1914-1953, and the average fed funds rate during the last month of the year from 1954 to present. Our interest rate series for these two countries are plotted in the top row of Exhibit 2.2, and for 15 other countries in the panels of Exhibit 2.3. The U.S. nominal rate shows a broad tendency to decline through World War II, rise sharply until 1980, and decline again since. The same broad trends are seen in most other countries. However, there are also dramatic differences across countries, such as the sharp spike in rates in Finland and Germany following World War I.
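    The U.S. splice described above (commercial paper rates for 1857-1913, the Fed discount rate for 1914-1953, the December-average funds rate thereafter) amounts to a lookup by year. A minimal sketch, with hypothetical rate values standing in for the actual series:

```python
def spliced_us_rate(year, commercial_paper, discount_rate, fed_funds_dec):
    """Pick the U.S. nominal short rate for `year` from three sources,
    following the date ranges described in the text. Each input is a dict
    mapping year -> annual rate in percent (values below are hypothetical)."""
    if 1857 <= year <= 1913:
        return commercial_paper[year]
    elif 1914 <= year <= 1953:
        return discount_rate[year]
    elif year >= 1954:
        return fed_funds_dec[year]  # average funds rate in December
    raise ValueError("no U.S. source before 1857")

# Hypothetical illustration:
cp = {1900: 4.4}
disc = {1930: 2.5}
ff = {1980: 18.9}
print(spliced_us_rate(1930, cp, disc, ff))  # prints 2.5
```

    The same pattern extends to other countries by swapping in their source series and cutover dates.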

    We also assembled estimates of the overall price level for each country. For the U.S., we felt the best measure for recent data is the GDP deflator, which is available since 1929. We used an estimate of consumer prices for earlier U.S. data and for all other countries. The annual inflation rates are plotted in the second row of Exhibit 2.2 for the U.S. and U.K. and for 15 other countries in the panels of Exhibit 2.4. There is no clear trend in inflation for any country prior to World War I, suggesting that the downward trend in nominal rates prior to that should be interpreted as a downward trend in the real rate. Inflation rose sharply in most countries after both world wars, with hyperinflations in Germany and Finland following World War I and in Japan and Italy after World War II. But the postwar spike in inflation was in every case much bigger than the rise in nominal interest rates.

    How much of the variation in inflation would have been reasonable to anticipate ex ante? Barsky (1987) argued that U.S. inflation was much less predictable in the 19th century than it became later in the 20th century. Consider for example using a first-order autoregression to predict the inflation rate in country n for year t:

    π_{n,t} = c_n + ρ_n π_{n,t-1} + ε_{n,t}
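    The procedure described in Section 2, expected inflation from a first-order autoregression re-estimated over rolling windows, with the ex-ante real rate equal to the nominal rate minus that forecast, can be sketched as below. The 20-period window and the constant-inflation example are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def ar1_forecast(history):
    """OLS fit of pi_t = c + rho * pi_{t-1} + e_t on `history`,
    then a one-step-ahead forecast from the last observation."""
    y = np.asarray(history[1:], dtype=float)
    x = np.asarray(history[:-1], dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    c, rho = np.linalg.lstsq(X, y, rcond=None)[0]
    return c + rho * history[-1]

def ex_ante_real_rate(nominal, inflation, window=20):
    """Ex-ante real rate: nominal rate minus expected inflation, where
    expected inflation is an AR(1) forecast re-estimated on a rolling
    window of past inflation (20 periods here, an illustrative choice)."""
    out = {}
    for t in range(window, len(nominal)):
        expected_pi = ar1_forecast(inflation[t - window:t])
        out[t] = nominal[t] - expected_pi
    return out

# With constant 2% inflation the AR(1) forecast reproduces it, so a
# constant 5% nominal rate implies a 3% ex-ante real rate throughout.
rates = ex_ante_real_rate([5.0] * 30, [2.0] * 30)
```

    Allowing the coefficients to vary over time, as the authors do, is exactly the effect of re-estimating c and rho on each rolling window rather than once on the full sample.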
    Dennis (2009, equations (6), (7), (11) and (12)) supplies the first-order analogues to (3.3) when utility is (a) of the form (3.11), or (b) when habit is multiplicative rather than additive. It follows from Dennis's expressions that neither internal nor external habit substantially affects the mean level of the safe rate when parameters are varied within the plausible range. Specifically, for additive habit, such as in (3.11) above, it follows analytically from Dennis's (11) and (12) that variation in habit has no effect on the mean safe rate. For multiplicative habit we have solved numerically for a range of plausible parameters and find habit has little effect on the mean rate. (Dennis's expressions are log-linearized around a zero-growth steady state. We have derived the log-linearization in the presence of nonzero growth in one case (additive external habit), and the conclusion still holds.)

    Campbell and Cochrane (1999) let conditional second moments vary over time. They assume that the conditional variance of what they call "surplus consumption" rises as consumption Ct approaches habit Xt. They parameterize this in a way that delivers an equilibrium real rate that is indeed plausibly low on average. The model, however, implies counterfactual relations between nominal and real rates (Canzoneri et al. (2007)).

    Hence our review of existing literature leads us to conclude that it is unlikely to be productive to focus on consumption when modeling the real rate, despite the strong theoretical presumption of a link between consumption growth and the real rate. The remaining parts of this section focus on GDP growth instead.

    3C. Output growth and the real rate in the U.S.

    There are theoretical reasons to expect a long-run relation between the real rate and GDP growth. In a model with balanced growth, consumption will, in the long run, grow at the same rate as output and potential output. Thus the combination of the intertemporal condition (3.4) and balanced growth means that over long periods of time, the average short real rate will be higher when the growth rate of output is higher and lower when output growth is lower. Perhaps there is a clear long-run relationship between output and the real rate, despite the weak evidence of such a relationship between consumption and the real rate. In this section we use our long-run U.S. dataset to investigate the correlation, over business cycles or over 10-year averages, between GDP growth and real rates. Our focus is on the sign of the correlation between average GDP growth and average real rates. We do not attempt to rationalize or interpret magnitudes. We generally refer to "average real rate" rather than equilibrium real rate. But of course our view is that we are taking averages over a long enough period that the average rate will closely track the equilibrium rate.

    Real rate data were described in Section 2. We now describe our output data. Our U.S. GDP data run from 1869 to the present. Balke and Gordon (1989) is the source for 1869-1929, FRED the source for 1929-present. Quarterly dates of business cycle peaks are from the NBER. When we analyze annual data, quarterly turning points given by the NBER were assigned to calendar years using Zarnowitz (1997, pp. 732-33). Zarnowitz's work precedes the 2001 and 2007 peaks, so we assigned those annual dates ourselves. When, for robustness, we briefly experiment with potential output instead of GDP, the CBO is our source.

    As just noted, we focus on the sign of the correlation between average GDP growth and average real rates. We find that this sign is sensitive to sample, changing sign when one or two data points are removed. We did not decide ex ante which data points to remove. Rather, we inspected plots presented below and noted outliers whose removal might change the sign of the correlation. Ex post, one might be able to present arguments for focusing on samples that yield a positive correlation, and thus are consistent with the positive relation suggested by theory. But one who does not come to the data with a prior of such a relation could instead conclude that there is little evidence of a positive relation.

    Peak to peak results

    Peak to peak results are in Exhibits 3.1-3.4. Our baseline set of data points for the peak to peak analysis is the 7 (quarterly) or 29 (annual) pairs of (GDP growth, r) averages presented in Exhibit 3.1. Here is an illustration of how we calculated peak to peak numbers. In our quarterly data, the last two peaks are 2001:1 and 2007:4. Our 2007:4 values are 2.52 for GDP growth and 0.45 for the real interest rate. Here, 2.52 is average GDP growth over the 27 quarters from 2001:2 (that is, beginning with the quarter following the previous peak) through 2007:4, with 0.45 the corresponding value for the real rate.

    Let us begin with quarterly data (Exhibit 3.2, and rows (1)-(4) in Exhibit 3.4). A glance at the scatterplot in Exhibit 3.2 suggests the following. First, the correlation between average GDP growth and the average real rate is negative, at -0.40 it so happens. (See line (1), column (6) of Exhibit 3.4. That exhibit reports this and other peak-to-peak correlations that we present here in the text.) Second, the negative correlation is driven by 1981:3. If we drop that observation (which, after all, reflects a cycle lasting barely more than a year (1980:2-1981:3), and is sometimes considered part of one long downturn, e.g., Mulligan (2009) and Angry Bear (2009), and our own Exhibit 4.9 below), the correlation across the remaining six peak to peak averages is indeed positive, at +0.32 (line (2) of Exhibit 3.4). If we continue to omit the 1981:3 peak, but substitute CBO potential output for GDP (line (3)) or ex-post interest rates for our real rate series (line (4)), the correlation falls to -0.01 or 0.17.

    Of course, such sensitivity to sample or data may not be surprising when there are only six or seven data points. But that sensitivity remains even when we turn to the much longer time series available with annual data, although the baseline correlation is now positive.
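    The sensitivity checks above amount to recomputing a correlation after deleting a single point. A minimal sketch with made-up (growth, real-rate) pairs, in which one outlier cycle flips the sign, the way dropping 1981:3 does in the quarterly data:

```python
import numpy as np

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical peak-to-peak averages: five cycles that line up positively,
# plus one outlier cycle that dominates the small sample.
growth = [1.0, 2.0, 3.0, 4.0, 5.0, 5.5]
rate   = [1.1, 2.0, 2.9, 4.2, 5.1, -10.0]

full = corr(growth, rate)               # the outlier drags the correlation negative
dropped = corr(growth[:-1], rate[:-1])  # removing one point flips the sign
print(round(full, 2), round(dropped, 2))
```

    With only six or seven cycles, a single observation can carry this much weight, which is the authors' point about not reading too much into any one sign.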

    The averages computed from annual data in columns (5) and (6) in Exhibit 3.1 are plotted in Exhibit 3.3. A glance at the scatterplot in that exhibit reveals the positive correlation noted in the previous paragraph, at 0.23 it so happens (line (5) of Exhibit 3.4). That correlation stays positive, with a value of 0.30 (line (6) of Exhibit 3.4) if we drop 1981, the peak found anomalous in the analysis of quarterly data.

    However, for annual data, one's eyes are drawn not only to 1981 but also to points such as 1918, 1920, 1944 and 1948. One can guess that the correlation may be sensitive to those points. To illustrate: if we restore 1981 but remove the postwar 1920 and 1948 peaks, the correlation across the remaining 27 peak to peak averages is negative, at -0.23 (line (7)). If we instead drop the three peaks that reflect the Great Depression or World War II, the correlation is again positive, at 0.29 (line (8)).

    The remaining rows of Exhibit 3.4 indicate that the annual data give results congruent with the quarterly data when the sample period is restricted (lines (9) and (10)), and that the annual results are not sensitive to the measure or timing of aggregate output (Romer (1989) and year-ahead data in lines (11) and (12)).

    We defer interpretation of sensitivity until we have also looked at backward moving averages of U.S. data, and cross-country results.


    We consider 40-quarter (quarterly data) or 10-year (annual data) backward moving averages. Ten years is an arbitrary window intended to be long enough to average out transient factors, and presumably will lead to reasonable alignment between average output growth and trend growth. Using annual data, we also experimented with a 20-year window, finding results similar to those about to be presented.
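    A backward moving average of the kind just described can be computed with a cumulative sum. A minimal sketch on illustrative data (the 10-year annual window corresponds to 40 quarters in the quarterly data):

```python
import numpy as np

def backward_moving_average(series, window):
    """Backward-looking moving average: entry t is the mean of the
    `window` observations ending at t, so the first window-1 entries
    are undefined and returned as NaN."""
    s = np.asarray(series, dtype=float)
    out = np.full(s.shape, np.nan)
    csum = np.concatenate([[0.0], np.cumsum(s)])
    out[window - 1:] = (csum[window:] - csum[:-window]) / window
    return out

# Illustrative annual series 1, 2, ..., 20 with a 10-year window.
g = backward_moving_average(range(1, 21), window=10)
```

    Applying this to both the growth series and the real rate series, then correlating the two smoothed series, reproduces the kind of calculation reported in Exhibit 3.5.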

    Numerical values of correlations are given in column (6) of Exhibit 3.5, with scatterplots presented in Exhibits 3.6 and 3.7. In Exhibit 3.6, the fourth quarter of each year is labeled with the last two digits of the year. We see in Exhibit 3.6 that for quarterly data, the correlation between the 40-quarter averages is positive, at 0.39 it so happens (line (1) in Exhibit 3.5). This is consistent with the quarterly peak-to-peak correlation of 0.32 when 1981:3 is removed (line (2) of Exhibit 3.4). The result is robust to use of ex-post real rates (line (3)). But, as is obvious from Exhibit 3.6, if we remove the post-2007 points, which trace a path to the southwest, the correlation becomes negative, at -0.19 (line (2)). We see in Exhibit 3.7 that for annual data, the correlation between 10-year averages is negative, at -0.25 it so happens (line (4) in Exhibit 3.5). The postwar sample yields a positive correlation (line (5)). Omitting 1930-1950, so that the Depression years fall out of the sample, also turns the correlation positive (line (6)). The value of 0.31 is consistent with the 0.29 figure in line (8) of the peak-to-peak results in Exhibit 3.4, which also removed Depression and post-World War II years.

    3D. Cross-country results

    Our GDP data come from the OECD. The source data were real, quarterly and seasonally adjusted. Sample coverage is dictated by our real rate series that were described in Section 2. Our real rate series for all countries had a shorter span than our GDP data. Our longest sample runs from 1971:2-2014:2.

    We compute average values of GDP growth and of the real interest rate over samples of increasing size, beginning with roughly one decade (2004:1-2014:2, to be precise) and then moving the start date backwards. The sample for averaging increases to approximately two (1994:1-2014:2), then three (1984:1-2014:2), and finally four (1971:2-2014:2) decades. Some countries drop out of the sample as the start of the period for averaging moves back from 2004 to 1971.
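    The expanding-window averages, with a country dropping out whenever its data do not reach back to the window's start date, can be sketched as follows. Countries and values below are hypothetical placeholders, and annual frequency is used for simplicity where the paper uses quarterly data:

```python
def window_averages(series_by_country, start_years, end=2014):
    """For each window [start, end], average each country's annual series
    over the window; a country drops out if its data do not cover the
    whole window. `series_by_country` maps country -> {year: value}."""
    results = {}
    for start in start_years:
        row = {}
        for country, data in series_by_country.items():
            years = range(start, end + 1)
            if all(y in data for y in years):
                row[country] = sum(data[y] for y in years) / len(years)
        results[start] = row
    return results

# Hypothetical: country "A" has a long series, "B" starts only in 1995,
# so "B" drops out once the window start moves back before 1995.
gdp = {
    "A": {y: 2.0 for y in range(1971, 2015)},
    "B": {y: 3.0 for y in range(1995, 2015)},
}
tbl = window_averages(gdp, start_years=[2004, 1994])
```

    Running the same routine on both growth and real-rate series, then correlating the per-country averages within each window, gives the "corr" row of Exhibit 3.8.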

    Exhibit 3.8 presents the resulting values. Exhibit 3.9 presents scatterplots of the data in Exhibit 3.8. Note that the scale of the 2004:1-2014:2 scatterplot is a little different from that of the other three scatterplots.

    As suggested by the scatterplots and confirmed by the numbers presented in the “corr” row of Exhibit 3.8, the correlation between average GDP growth and average real rates is positive in all four samples, and especially so in the 20 year sample. However, the sign of the correlation is sensitive to inclusion of one or two data points. For example, in the 1984-2014 sample, if Australia is omitted, the correlation turns negative.

    3E. Summary and interpretation

    Both our U.S. and our international data yield a sign for the correlation between average GDP growth and the average real interest rate that is sensitive to sample, with correlations that are numerically small in almost all samples. However, the theoretical presumption that there is a link between aggregate growth and real rates is very strong. One could make an argument to pay more attention to the samples that yield a positive correlation (for example, dropping 1980-81 from the set of full U.S. expansions or dropping 1930-1950 from the 10-year U.S. averages) and deduce that there is modest evidence of a modestly positive relationship between the two. For our purposes, we do not need to finely dice the results to lean either towards or against such an argument. Rather, we have two conclusions. First, if indeed we are headed for stagnation for supply-side reasons (Gordon (2012, 2014)), any such slowdown should not be counted on to translate into a lower equilibrium rate over periods as short as a cycle or two or a decade. Second, the relation between average output growth and average real rates is so noisy that other factors play a large, indeed dominant, role in the determination of average real rates. In the next section we take a narrative approach to sorting out some of these factors.

    4. A narrative interpretation of historical real rates

    Much of the recent discussion of the equilibrium real rate has relied on a framework similar to the simple one sketched in equation (3.5) above, in which the major factor responsible for shifts in the IS curve is changes in the trend growth of the economy. Although this is a very common assumption, we found at best a weak link between trend growth and the equilibrium rate.

    More generally, theoretical models suggest trend growth is not the only factor that can shift the equilibrium rate. We noted above that the literature has considered varying the discount factor, varying the utility function and dropping the representative agent / complete markets paradigm. In connection with the last, we note that much research assumes that the interest rate that governs consumption decisions in equation (3.5) and its generalizations for other utility functions is the risk-free real rate. However, as noted for example by Wieland (2014), in an economy with financial frictions the rate at which households and firms borrow can differ substantially from the risk-free rate. The literature on the monetary transmission mechanism suggests the equilibrium real funds rate will also be sensitive to changes in the way monetary policy is transmitted through long-term rates, credit availability, the exchange rate and other asset prices. The equilibrium rate will also be sensitive to sustained changes in regulatory or fiscal policy. Finally, the typical models assume that changes in the trend inflation rate have no effect on the real interest rate, an assumption that again turns out to be hard to reconcile with the observed data.

    In this section we provide a narrative review of the history of the U.S. real interest rate to call attention to the important role of factors like the ones referenced in the preceding paragraph in determining changes in real rates over time. Since our focus is on the equilibrium rate, we look at averages over various time periods, taking into account forces that may have shifted the equilibrium rate or caused the average to deviate from equilibrium at the time. Our ultimate goal is to understand whether similar forces are at play today. We take a particularly close look at one of the most popular narrative interpretations of recent developments: the view that the US economy is suffering from "secular stagnation," that is, persistent weak demand and a near-zero equilibrium rate. Our tentative conclusion from this exercise is that the equilibrium rate currently is between 1 and 2%, but there is considerable uncertainty about how quickly rates will return to equilibrium and the degree of likely overshooting at the end of the business cycle.

    6 This is consistent with the formal econometric work of Clark and Kozicki (2005, p. 403), who conclude that the link between trend growth and the equilibrium real rate is "quantitatively weak."

    In this analysis we will be referring to two different measures of the real rate. The "ex-ante real rate" is the estimate developed in Section 2, which proxies inflation expectations using an autoregressive model, estimated over rolling windows, for the GDP deflator (for data after 1930) or a CPI (for data before 1930). The "static-expectations real rate" is the measure that people in the markets and the Fed look at most often, calculated as the nominal interest rate minus the change in the core PCE deflator over the previous 12 months. Exhibit 4.1 repeats Exhibit 2.7, with the static-expectations real rate added on. As the Exhibit shows, the two real rate series align very closely. Over the 1960 to 2014 period, the GDP-based ex-ante real rate and the PCE-based static-expectations real rate both average 2.01%.
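    The static-expectations measure just defined is a one-line computation once the price index is in hand. A minimal sketch, with a constructed monthly index (2% annual inflation) standing in for the core PCE deflator:

```python
def static_expectations_real_rate(nominal_rate, price_index, t):
    """Nominal rate (percent) minus trailing 12-month inflation, where
    inflation is the percent change in the price index from t-12 to t."""
    yoy_inflation = 100.0 * (price_index[t] / price_index[t - 12] - 1.0)
    return nominal_rate - yoy_inflation

# Constructed example: a monthly index growing exactly 2% per year,
# so a 4% nominal rate implies a 2% static-expectations real rate.
index = [100.0 * 1.02 ** (m / 12.0) for m in range(24)]
r = static_expectations_real_rate(4.0, index, t=23)
```

    Unlike the ex-ante measure, this requires no estimation, which is one reason markets and the Fed track it; the paper's point is that the two series nonetheless align closely on average.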

    4A. The real interest rate before World War II

    Exhibit 4.2 reproduces our long-history ex-ante real rate series for the United States from the lower left panel of Exhibit 2.2. The first thing that stands out in the real rate data is the notable downward shift in the real rate starting in the 1930s. U.S. real rates averaged 4.2% before World War I and only 1.3% since World War II. We found a similar drop for virtually every other country we looked at.

    Three factors may account for the secular decline in real rates. First, in the earliest periods the short rate may not have been truly risk free. As Reinhart and Rogoff (2009) and others have documented, the period before World War II is laden with sovereign debt defaults. Almost all the defaults occurred when countries were in an emerging stage of development. In their data set, only Australia, New Zealand, Canada, Denmark, Thailand and the U.S. never had an external debt default. In the U.S. case, however, bouts of high inflation in the American Revolution and Civil War and the exit from the gold standard in 1933 had an effect similar to default.

    Second, before the Great Depression financial markets were much less regulated. Interest rates, rather than credit and capital constraints, did the work of equilibrating supply and demand.

    Third, and perhaps the most important explanation in the economic history literature, is low life expectancy. From 1850 to 2000 the average life expectancy for a 20-year-old American male rose from 58 to 76. Shorter life expectancies in the past created two kinds of risks. First, absent a strong bequest

    7 Source: http://mappinghistory.uoregon.edu/english/US/US39-01.html

    motive, a short life expectancy should mean a high time value of money. You can’t take it with you. Second, shorter life expectancy increases the risk of nonpayment.

    Regardless of the cause of the shift, this suggests a good deal of caution in trying to extrapolate from these early years to the current economy.

    History lesson #1: The equilibrium rate is sensitive to time preference and perceptions about the riskiness of government debt.

    History lesson #2: Judging the equilibrium rate using long historical averages can be misleading.

    4B. Financial repression (1948-1980)

    Reinhart and Sbrancia (2015) define financial repression as a regulatory effort to manage sovereign debt burdens that may include “directed lending to government by captive domestic audiences (such as pension funds), explicit or implicit caps on interest rates, regulation of cross-border capital movements, and (generally) a tighter connection between government and banks.” The period immediately following World War II was one of financial repression in many countries, including the U.S. If there are limited savings vehicles outside of regulated institutions and if those institutions are encouraged to lend to the government, this can lower the cost of funding government debt and the equilibrium rate. As noted by Reinhart and Rogoff (2009, p. 106),

    During the post-World War II era, many governments repressed their financial markets, with low ceilings on deposit rates and high requirements for bank reserves, among other devices, such as directed credit and minimum requirements for holding government debt in pension and commercial bank portfolios.

    Not surprisingly, real policy rates were very low for most of this period. Before the Fed-Treasury Accord of 1951, interest rates were capped at 3/8% for 90-day bills, 7/8 to 1 ¼% for 12-month certificates of indebtedness and 2 ½% for Treasury bonds (Exhibit 4.3). The caps were maintained despite wild swings in inflation to as high as 25%. In the 1930s and 1940s the Fed also frequently used changes in reserve requirements as an instrument of monetary control.

    The Accord gave the Fed the freedom to raise interest rates, but a variety of interest rate caps and other restrictions continued to hold down the equilibrium rate into the 1970s. When monetary policy was loose, rates fell; but when monetary policy tightened, a variety of ceilings became binding and the main restraint from monetary policy came from the quantity of credit rather than the price of credit. As Exhibit 4.4 shows, three-month T-bill rates rose above the Regulation Q deposit rate ceiling several times during this period. Indeed, many models of real activity at the time used dummy variables to capture a series of credit crunches during this period, in particular 1966 and 1969-70. By the late

    8 Clark (2005) argued that these developments account for a decline in interest rates beginning with the industrial revolution.

    1970s the constraints had become less binding and interest rate ceilings were phased out from 1980 to 1986.

    History lesson #3: The equilibrium real rate is sensitive to the degree of financial constraint imposed by regulations and by the degree to which policy relies on quantity rather than price (interest rates) to manage aggregate demand.

    4C. The inflation boom and bust (1965-1998)

    The era of financial repression overlapped with the Great Inflation. Inflation was very low and stable in the early 1960s, but started to move higher in 1965. Exhibit 4.5 shows the history of headline and core PCE inflation. In 1966 the Fed tried to put on the brakes by hiking rates. This caused disintermediation out of the mortgage market and a collapse in the housing sector. The Fed then backed off, marking the beginning of a dramatic surge in inflation. From 1971 to 1977 the ex-ante real funds rate averaged just 0.3%, reflecting both persistently easy policy and a series of inflation surprises for investors.

    From 1980 to 1998 the inflation upcycle was completely reversed. PCE inflation fell back to 1%. Starting with Volcker, the Fed created persistently high rates. During this period the “bond vigilantes” extracted their revenge, demanding persistently high real returns. Survey measures of inflation expectations also showed a persistent upward bias. Over the period the ex-ante real funds rate averaged 4.1%. With the Fed pushing inflation lower, interest rates probably were above their long-run equilibrium level during this period.

    Both inflation and real interest rates have been very low over the past two business cycles. Since 1998, year-over-year core PCE inflation has fluctuated in a narrow band of 1% to 2.4%. Consumer surveys of inflation expectations dropped to about 3% in the mid-1990s and have stayed there ever since (Exhibit 4.6). Surveys of economists, such as the Survey of Professional Forecasters, have settled in right on top of the Fed’s 2% PCE inflation target (also Exhibit 4.6).

    History lesson #4: Trends up or down in inflation can influence the real interest rate for prolonged periods. Real rate averages that do not take this into account are poor proxies for the equilibrium rate.

    4D. Real rates in delayed recoveries (1991-2007)

    Both the 1991-2001 and 2002-2007 cycles differed significantly from past recoveries. Historically, the economy comes roaring out of a recession and the bigger the recession, the faster the bounce back. Exhibit 4.7 shows a simple “spider” chart of payroll employment indexed to the trough of the last 7 business cycles. Note the slow initial rebound in 1991, 2002 (and in the current cycle). This initially weak recovery prompted considerable speculation about permanent damage to growth and permanently lower rates. In 1991 Greenspan argued that heavy debt, bad loans, and lending caution by

    9 For expository purposes we have excluded the brief 1980 cycle. Also note that earlier cycles look similar to the 1970s and 1980s cycles.

    banks were creating “50 mile-per-hour” headwinds for the economy. But by 1993 Greenspan was changing his tune: “The 50-miles-per-hour headwinds are probably down to 30 miles per hour.” The same thing happened in the 2001-2007 cycle: fear of terrorism, corporate governance scandals, the tech overhang and fear of war in the Middle East all appeared to weigh on growth. When the Iraq War ended without a major oil shock or terrorist event, GDP growth surged at a 5.8% annual rate in the second half of 2003 and by 2005 the unemployment rate had dropped below 5%.

    These delayed recoveries had a major impact on funds rate expectations. When the Fed first started hiking rates in February 1994 the market looked for the funds rate to rise about 100 bp over the next 24 months; in the event, the Fed hiked the funds rate by 300 bp in 13 months (Exhibit 4.8). The ex-ante real rate averaged 2.9% over the full business cycle, but stood at 4.7% at the end of the cycle as the Fed fought inflation (Exhibit 4.9). In the next cycle, when the Fed finally started to hike in June 2004, many analysts thought a normal hiking cycle was not possible.11 When the Fed started to move, the markets were pricing in 170 bp of rate hikes over the next 24 months; in the event, the Fed hiked by 425 bp over a 24-month period. The real funds rate averaged just 0.5% over the full business cycle, but again peaked at a much higher 3.1%. The PCE-based measure yields numbers that are about two tenths higher than these averages of the ex-ante real rate.

    History lesson #5: Persistent headwinds can create a persistently low real rate, but when headwinds abate rates have tended to rise back to their historic average or higher.

    4E. Real rates, gluts, conundrums and shortages (2001-2007)

    While for most of this paper we have ignored the broader global backdrop, a big story in the 2000 cycle was the unusual behavior of bond yields globally. From 2004 to 2006 the Fed hiked the funds rate by 425 bp and yet 10-year yields only rose about 40 bp. Greenspan (2005) called this the “bond conundrum,” pointing to an even bigger drop in yields outside the US, pension demand as populations age, reserve accumulation by EM central banks, and, perhaps most important, a growing pool of global savings. Bernanke (2005) described this as a “glut of global savings,” noting that after a series of crises many emerging market economies were building up massive currency reserves. He also pointed to rising savings by aging populations in Germany, Japan and other developed economies and to the attractiveness of US capital markets. Caballero (2006) and others make a related argument that there is a “safe asset shortage” caused by a rapid growth in incomes and savings in emerging markets and a shortage of safe local saving vehicles due to undeveloped capital markets.

    It is not entirely clear whether the “glut,” “conundrum,” or “shortage” lowers or raises the equilibrium real funds rate. All else equal, lower US bond yields and compressed term premia stimulate the economy, forcing the Fed to hike more to achieve the same degree of financial restraint. However, not all else is equal. For example, central bank buying of US treasuries presumably put some upward

    11 For example, McCulley (2003) argued that the equilibrium real funds rate was close to zero. He argued that “overnight money, carrying zero price risk, zero credit risk and zero liquidity risk should not yield a real after-tax return.”

    pressure on the dollar, contributing to the sharp widening of the trade deficit. Indeed, as Exhibit 4.10 shows, from the peak of the previous business cycle (2000:1) to the peak of the construction boom (2005:3), housing as a share of GDP rose by 2 pp and net exports as a share of GDP fell by 2 pp. On net, the saving glut may have not changed overall financial conditions, but instead made them imbalanced, contributing to both a surging trade deficit and a housing bubble. The upshot of all of this is that the glut did not prevent significant Fed rate hikes. As we noted above, the static-expectations real rate peaked at 3.3% in 2006.

    History lesson #6: The global saving glut probably distorted overall US financial conditions, but did not have a clear impact on the equilibrium real funds rate.

    4F. Secular stagnation and the equilibrium rate (1982-?)

    Our narrative approach to the history of the equilibrium rate is particularly useful in addressing a competing “narrative theory” of the last several business cycles: the idea that the economy suffers from secular stagnation. The idea goes back to the 1930s, when Alvin Hansen asked whether the economy would ever be able to achieve satisfactory growth. He was concerned both about chronic deficient demand and about lower trend growth in the economy and hence a low equilibrium real rate.

    The secular stagnation hypothesis.

    Krugman, Dominguez, and Rogoff (1998) revived Hansen’s concerns, suggesting that when the equilibrium real interest rate is negative, an economy could get stuck at suboptimal growth and deflation as a result of the zero lower bound on nominal interest rates. Summers (2013b) expressed the hypothesis this way:

    Suppose that the short-term real interest rate that was consistent with full employment had fallen to negative two or negative three percent in the middle of the last decade. Then … we may well need, in the years ahead, to think about how we manage an economy in which the zero nominal interest rate is a chronic and systemic inhibitor of economic activity, holding our economies back below their potential.

    Summers (2014) suggested that secular stagnation in the U.S. goes back to the 1990s, arguing that the strong performance in the 1990s “was associated with a substantial stock market bubble.” Again in 2007 the economy did “achieve satisfactory levels of capacity utilization and employment”, but this was due to the housing bubble and “an unsustainable upward movement in the share of GDP devoted to residential investment.” He queried “in the last 15 years: can we identify any sustained stretch during which the economy grew satisfactorily with conditions that were financially sustainable?” Finally, Summers extended this argument to the rest of the industrial world, pointing to even worse performance in Japan and Europe.

    12 Summers is basically restating the “serial bubbles” view of recent business cycles popularized by Stephen Roach and many others. See, for example, http://delong.typepad.com/sdj/2005/06/stephen_roach_o.html

    Krugman (2013) also argued that bubbles have been necessary to achieve economic growth:

    We now know that the economic expansion of 2003-2007 was driven by a bubble. You can say the same about the latter part of the 90s expansion; and you can in fact say the same about the later years of the Reagan expansion, which was driven at that point by runaway thrift institutions and a large bubble in commercial real estate…. So how can you reconcile repeated bubbles with an economy showing no sign of inflationary pressures? Summers’s answer is that we may be in an economy that needs bubbles just to achieve something near full employment – that in the absence of bubbles, the economy has a negative equilibrium rate of interest. And this hasn’t just been true since the 2008 financial crisis; it has arguably been true, although perhaps with increasing severity, since the 1980s.

    Were near-zero rates and/or asset bubbles essential to achieving full employment in the 1982, 1990 and 2000 business cycles? Is underlying demand so weak that it is impossible to create inflation pressure even with super-easy policy? A close look at these cycles shows little support for either of these propositions.

    Unemployment, inflation, and the real interest rate over the last 3 cycles.

    The US economy has not been suffering chronic under-employment. The economy not only reached full employment in each of the last three business cycles, it actually significantly overshot full employment. This is true whether one uses typical estimates of the NAIRU from the CBO, IMF or OECD or if one takes an agnostic approach and simply uses the historic average unemployment rate (5.8% in the post-war period). For example, using CBO estimates, the US overshot the NAIRU by 0.6 to 1.1 pp in each cycle, and these periods of tight labor markets lasted between 8 and 18 quarters (Exhibit 4.11). CBO estimates of the output gap show similar results: GDP was above potential in 1988-1989, 1997-2001 and 2005-2006. Note that this success in achieving a full recovery is not an artifact of assuming low potential growth or a high NAIRU: during this period CBO estimates of potential growth rose and the estimated NAIRU fell. These extended periods where aggregate demand exceeded aggregate supply are hardly a sign of secular stagnation.

    Exhibit 4.12 shows furthermore that each of the last three cycles ended with incipient inflation pressure. In the 1980s cycle, the Fed pushed inflation down to below 4%, but by 1988 it was trending up again. In the 1990s, inflation also picked up at the end of the business cycle, although core PCE inflation only briefly pierced 2%. Presumably, this was related to the unexpected surge in productivity during this period. On a 5-year basis, growth in nonfarm business productivity peaked at 3% at the end of the 1990s expansion, up from just 2% over the previous 20 years or so. Core inflation was persistently above 2% in the second half of the 2000 expansion and headline inflation was above 3% other than a brief interruption in 2006. This seems inconsistent with the idea that the Fed had trouble sustaining normal inflation.

    Of course, the rise in inflation at the end of recent economic expansions has been milder than in the 1960s and 1970s. However, in our view, this is not a sign that the Fed cannot create inflation; instead, it shows that they have learned when to apply the brakes, gaining credibility along the way. The 1970s experience taught the Fed about the risks of trying to exploit the short-run Phillips Curve and the importance of finishing the job in eradicating unwanted inflation. A good measure of their success in restoring credibility is that both survey and market measures of inflation expectations have become very stable. In Exhibit 4.13, we show the standard 10-year inflation breakeven, along with a measure from the Federal Reserve Bank of Cleveland that attempts to remove term and risk premia. The recent weak response of inflation to tight labor markets probably also reflects the unexpected productivity boom in the 1990s; increased global integration, making the US sensitive to global as well as domestic slack; the weakening of union power and low minimum wages; and a host of other factors. In our view these conventional arguments for a flatter Phillips Curve are more compelling than the secular stagnation thesis.

    History lesson #7: During the period of alleged secular stagnation, the unemployment rate was below its postwar average and inflation pressures emerged at the end of each cycle.

    Over the 1982 to 2007 period as a whole the ex-ante real rate averaged 3.0% (and the static-expectations measure averaged 2.9%). This was above the 2.0% post-war average, but since the Fed was trying to lower inflation in the first half of this period, we believe the average rate was higher than its equilibrium level during this period. The 1980s cycle had the normal strong start and quick funds rate normalization. However, for both the 1990 and 2000 cycles, the economic recovery was initially weak and the funds rate was persistently low. As headwinds faded, however, the funds rate eventually surged above its long-run average. Looking at the individual cycles, the economy reached full employment with an ex-ante real rate of 3.3%, 4.0% and 0.25%, respectively (again see Exhibit 4.9). In each cycle, the real rate eventually peaked well above its historic average (last column of Exhibit 4.9).

    History lesson #8: During the first part of the period of alleged secular stagnation (1982-2007) the real rate averaged 3%, a percentage point higher than its post-war average.

    The role of asset bubbles in the last three recoveries

    What about asset bubbles? Were they essential to achieving full employment and normal inflation? The evidence is mixed, but a close look at the three cycles offers little support for the secular stagnation thesis. As we will show, the timing of the alleged bubbles doesn’t really fit the stagnation story.

    1982-1990. Asset bubbles may have had some impact on the 1982-90 economic recovery, with a boom in commercial real estate and related easy lending from savings and loans. However, the economy hit full employment in 1987 and stayed there even though the tax reform of 1986 had already undercut the real estate boom and even as the stock market crashed in 1987. Thus, while nonresidential investment did surge in the early 1980s, it collapsed after tax reform in 1986. As seen in Exhibit 4.14, over the course of the recovery structures investment plunged as a share of GDP. The Savings and Loan industry followed a similar pattern. The heyday of easy S&L lending was in the early 1980s. From 1986 to 1989 the Federal Savings and Loan Insurance Corporation (FSLIC) had already closed or otherwise resolved 296 institutions. Then the Resolution Trust Corporation (RTC) took over and shuttered another 747 institutions. The boom and bust in these two sectors caused shifts in aggregate demand, but it is hard to see their role in achieving and maintaining a low unemployment rate after 1986.

    History Lesson #9: The economy reached full employment in the 1980s despite high real interest rates and retrenchment in the real estate and S&L industry in the second half of the recovery.

    1990-2000. The asset bubble story is even less convincing in the 1990s recovery. The NASDAQ started to disconnect from the economy and the rest of the stock market in late 1998 and surged out of control in 1999 (Exhibit 4.15). However, before the bubble started, the unemployment rate had already dropped to 4.7% in 1997, well below both its historic average and CBO’s ex post estimate of full employment. Hence the NASDAQ bubble may have contributed to the subsequent overheating at the end of the economic recovery, but it is putting the cart before the horse to argue that it was necessary for achieving full employment.

    History Lesson #10: The NASDAQ bubble came after the economy reached full employment and therefore was not a precondition for achieving full employment.

    2000-2008. Of the three recent business cycles, the 2000 cycle provides the best support for the argument that monetary policy is only stimulative if it creates asset bubbles. Data from Core Logic shows national home prices rising very slowly in the early 1990s, but then accelerating to double-digit rates and peaking in 2005. The Case-Shiller measure of national home prices shows a slow acceleration in the early 1990s, and then an acceleration to double-digit rates, peaking in 2005. Bank of America Merrill Lynch’s model of the Case-Shiller data suggests home prices began to diverge from their fair value in 2001 (Exhibit 4.16). Lending standards eased during this period, with a surge in exotic lending starting in the second half of 2004. Meanwhile, leverage ratios and off-balance-sheet asset expansion surged.

    Was the recovery in the economy unusually weak given the credit bubble during this period? Would the economy have reached full employment without the bubble? Getting a definitive answer is difficult, but at a minimum it requires looking not only at the biggest tailwind in this period (the housing bubble) but also at the biggest headwinds: the sharp increase in the trade deficit and the relentless rise in energy prices. Here we compare the positives and the negatives using some simple metrics. Note that for each chart we draw a vertical line in 2005, when the unemployment rate had dropped to 5%, the CBO’s estimate of NAIRU.

    First, the boosts: easy credit stimulated a boom in both construction and consumer spending. As Exhibit 4.17 shows, residential investment has historically averaged 4.7% of GDP, with a typical peak of about 6%. However, in the 2000s cycle residential investment rose from 4.9% at the end of the 2001 recession in 2001Q4 to 6.6% at the housing market peak in 2006Q1. This boom occurred despite weak

    13 Summers (2014) also argues that “fiscal policy was excessively expansive” during this period. Note, however, that official estimates of the cyclically adjusted budget deficit show fiscal policy tightening from 2004 to 2007. For example, OECD estimates show cyclically adjusted net government borrowing falling from 6.1% of potential GDP in 2004 to 4.7% in 2007. Indeed, by this metric fiscal policy tightened in the second half of each of the last three cycles, with a particularly big tightening in the 1992-2000 period.

    demographics: the peak in first home buying is in the 30 to 39 age range, but this group shrank about 0.9% per year in the 2000 cycle. It therefore seems quite reasonable to attribute the gain mostly to easy credit, which would imply a boost of 1.7 percentage points directly through higher homebuilding, or 0.4 percentage point at an annual rate. However, it is worth noting that at the start of the Great Recession in 2007Q4 residential construction had already fallen back to just 4.8% of GDP.
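    The homebuilding arithmetic quoted in the text can be checked directly from the GDP shares it cites; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the homebuilding boost described in the text.
# Inputs are the residential investment shares of GDP quoted there.
res_inv_start = 4.9    # % of GDP at the end of the 2001 recession, 2001Q4
res_inv_peak = 6.6     # % of GDP at the housing market peak, 2006Q1

level_boost = res_inv_peak - res_inv_start    # percentage points of GDP
years = 17 / 4                                # 2001Q4 to 2006Q1 = 17 quarters

annual_boost = level_boost / years            # boost to annual GDP growth
print(round(level_boost, 1))    # 1.7
print(round(annual_boost, 1))   # 0.4
```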

    At the same time, surging home prices boosted consumer spending through both a classic wealth effect and a liquidity channel related to the surge in “mortgage equity withdrawal” (MEW) illustrated in Exhibit 4.18 and discussed in Feroli et al. (2012). To get a sense of the magnitude of these effects, we go back to the analysis in Hatzius (2006), which presented a simple model of consumer spending with separate housing wealth and MEW effects. In this analysis, the coefficient on (housing) wealth was estimated at 3.4 cents/dollar and that on “active MEW” (cash-out refinancing proceeds and home equity borrowing) at 62 cents/dollar; “passive MEW”, i.e. home equity extracted in the housing turnover process, was not significantly related to consumer spending. Using these estimates, the increases in the housing wealth/GDP ratio and active MEW from 2001Q4 to 2006Q1 added a total of 2.3% to the level of GDP, which implies a boost to growth of about 0.5 percentage point at an annual rate.

    Second, the drags: the increase in the trade deficit and rising energy prices were important counterweights.

    Regarding trade, the trade deficit increased by 2.4% of GDP from 2001Q4 to 2006Q1, subtracting 0.5 percentage point per year from growth. In our view, much of this increase was due to two forces: the direct impact of the housing and credit boom on import demand and the entry of a highly mercantilist China into the global economy post WTO accession. Both need to be taken into account when evaluating how quickly the economy “should” have grown during the housing and credit bubble.

    Regarding oil prices, we believe the price increase in the 2000s mostly reflected a combination of constrained supply and surging demand from emerging markets. Hence, from a US perspective, much of it was an exogenous supply shock. A simple approach for estimating the size of the shock is to look at the “tax” on household incomes from energy prices rising faster than nonenergy prices. In Exhibit 4.19 we compare the growth in the overall PCE deflator to the PCE excluding energy. Based on this metric, rising energy prices imposed a tax increase of about ½ percentage point of disposable income per year on the consumer between 2001Q4 and 2006Q1. Recognizing that consumption is about 70% of GDP and assuming a marginal propensity to spend of 70%, this number suggests a GDP hit of about ¼ percentage point per year over this period. After 2006, the energy hit to GDP growth increased further as oil prices rose even faster through mid-2008.
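    The translation of the energy “tax” into a GDP effect is simple multiplication, using the shares assumed in the text:

```python
# Translating the energy "tax" on household income into a GDP growth hit,
# using the shares assumed in the text.
energy_tax = 0.5          # pp of disposable income per year, 2001Q4-2006Q1
consumption_share = 0.70  # consumption as a share of GDP
mps = 0.70                # assumed marginal propensity to spend

gdp_hit = energy_tax * consumption_share * mps   # pp of GDP growth per year
print(round(gdp_hit, 3))   # 0.245, i.e. about 1/4 percentage point
```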

    Putting the shock variables together, we estimate that rising home construction and the housing wealth/MEW effect were adding just under 1 percentage point per year to growth from 2001Q4 to 2006Q1. Against this, the increase in the trade deficit and the surge in energy prices were subtracting about ¾ percentage point. In other words, the negative forces probably canceled out most of the stimulative impact of the housing bubble. By the peak of the business cycle, the winds had already shifted as construction and home prices started to slide and the energy tax surged. Nonetheless, the unemployment rate fell below NAIRU, bottoming at 4.4%.
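    The tally in this paragraph can be reproduced from the individual estimates quoted in the text (all figures are the text's own, in percentage points of annual GDP growth over 2001Q4-2006Q1):

```python
# Net effect of the boosts and drags estimated in the text
# (pp of annual GDP growth, 2001Q4-2006Q1).
boosts = {
    "homebuilding": 0.4,        # rising residential investment share
    "wealth_and_mew": 0.5,      # housing wealth effect plus active MEW
}
drags = {
    "trade_deficit": 0.5,       # 2.4% of GDP spread over ~4.25 years
    "energy_tax": 0.25,         # energy prices outpacing non-energy prices
}

total_boost = sum(boosts.values())   # "just under 1 percentage point"
total_drag = sum(drags.values())     # "about 3/4 percentage point"
net = total_boost - total_drag

print(total_boost, total_drag, round(net, 2))   # 0.9 0.75 0.15
```

The small positive residual is consistent with the text's reading that the drags canceled out most, though not all, of the bubble-induced stimulus.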

    Would the economy have achieved and sustained full employment in the absence of all of these shocks, positive and negative? It is impossible to do full justice to this period in a short narrative, but if we are right that a sizable portion of the obvious bubble-induced boosts was canceled out by equally obvious drags, the fact that the unemployment rate fell below NAIRU despite a 3% real funds rate suggests that the answer may well be yes.

    History lesson #11: Taking into account the offsetting headwinds from the rising trade deficit and higher oil prices as well as the tailwinds from the housing bubble, it is not clear whether the economy suffered secular stagnation in the 2000s.

    4G. Outlook for the current cycle

    With this historical narrative as our guide, what are the implications for the equilibrium rate today?

    First, the obvious: using historical averages from some periods of history as a gauge of equilibrium today can be quite misleading. The whole period before the Fed-Treasury Accord seems of very limited value. Real rates before WW I were chronically higher, presumably reflecting higher risk premiums and discount rates. Real rates fluctuated wildly during the Depression and war years. And the period of interest rate pegging is clearly not relevant today. In a similar vein, average real rates during a period when inflation is trending in one direction are clearly a poor measure of the equilibrium rate.

    Second, changes in the monetary transmission mechanism due to regulatory developments are clearly very important to determining equilibrium. Before financial deregulation, credit crunches did most of the “dirty work” in fighting overheating in the economy. This tended to cap the upside for real interest rates, lowering the average rate for the period. Today the long period of deregulation is over and regulatory limits are growing.

    Rather than dig into the deep weeds here, we would make the following observations. First, capital markets remain much less regulated than in the 1960s and 1970s. Today there is a big, active corporate debt market, global capital markets are wide open and banks play a much smaller role in the financial system. Bank capital and liquidity requirements have gone up, but restrictions on banks do not approach historic levels. Two areas may face chronically tight credit: residential mortgage lending and small business lending. But even here the constraint is tighter credit standards, not dramatic disintermediation episodes. Recall that even in the heavily regulated 1960s real rates averaged well above zero. For example, from 1960 to 1965, a period of stable 1% inflation, the real rate also averaged about 1% (recall Exhibit 4.1) for both ex-ante and static-expectations real rates.

    Third, as in the last business cycle, global forces seem to be having a big impact on the US bond market. The current negative real interest rates are a global phenomenon. Of the countries represented in Exhibit 2.1, 17 of the 20 estimated quarterly ex-ante real rates are negative as of the end of 2014, with 15 out of 17 negative in the comparable annual data. In the past 12 months US 10-year bond yields have plunged by more than 100 bp, despite the end of QE3, stable core inflation, the end of the Fed’s balance sheet expansion and a looming rate hike cycle. It appears that a combination of weak global growth, falling core inflation (particularly in Europe) and expectations of further central bank balance sheet expansion is putting downward pressure on global rates. In the two years ahead, we expect the “big four” central banks (the Fed, ECB, BOJ and BOE) to expand their combined balance sheet at almost double the pace of the last year. As with the previous “glut”, it is hard to know whether global developments are raising or lowering the equilibrium real funds rate.

    Fourth, our look back at the last two economic recoveries underscores the danger of mistaking short-run headwinds for permanent weakness. Recall that one of the great dangers in formal models of the equilibrium rate is the “end point problem”: estimates of time-varying parameters tend to be skewed by the most recent data. This problem is also critical for the narrative approach: simply put, it is a lot easier to identify the equilibrium rate after the business cycle is over than in real time. In the last two tightening cycles, the Fed started slow, but eventually pushed real rates well above their historic averages.

    Last, but not least, we are skeptical about the secular stagnation argument. We see two problems as it relates to the current recovery. First, it does not distinguish between a medium-term post-crisis problem and permanent stagnation. Clearly this is not a normal business cycle where a big collapse is followed by a big recovery (Exhibit 4.20). As Reinhart and Rogoff (2014) and many others note, when there is a systemic crisis both the recession and the recovery are different than in a normal business cycle. Summarizing 100 such episodes, they find that GDP typically falls by 10.3% and it typically takes 8.4 years to recover to pre-crisis levels. Their “severity index,” which adds the absolute values of these two numbers together, averages 19.6 for all 100 cases.
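
    The severity index is simple arithmetic: the absolute peak-to-trough GDP decline plus the number of years needed to regain the pre-crisis level. A minimal sketch with hypothetical episode numbers (the 10.3%, 8.4-year and 19.6 figures above are Reinhart and Rogoff's reported cross-episode averages, not the inputs below):

```python
# Severity index in the Reinhart-Rogoff (2014) sense: absolute
# peak-to-trough GDP decline (in percent) plus years to recover
# the pre-crisis level. Episode data below are hypothetical.
def severity_index(gdp_decline_pct, years_to_recover):
    return abs(gdp_decline_pct) + years_to_recover

episodes = [(-10.0, 8.0), (-12.5, 9.5), (-5.0, 4.0)]
indices = [severity_index(d, y) for d, y in episodes]
print(indices)                      # [18.0, 22.0, 9.0]
print(sum(indices) / len(indices))  # average severity, about 16.3
```

    Note that the index weights a one-percentage-point deeper decline the same as one extra year of recovery, which is why adding the two average components (10.3 + 8.4) need not exactly reproduce the average index across episodes.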

    Is history repeating itself? Most of these cycles predate the modern era of automatic stabilizers and countercyclical fiscal and monetary policy. They also ignore the special status of the US as the center of capital markets. And they don’t attempt to gauge the relative strength of the policy response to each crisis. Nonetheless, these historic averages are a good starting point for analyzing the current period. Indeed, as the last line of the table shows, the US has done much worse than following a normal recession, but measured against previous such cycles, the US has done quite well, with a smaller recession, a quicker recovery and a much smaller “severity index.”

    These systemic crises unleash extended periods of deleveraging and balance sheet repair. How long this impairs aggregate demand presumably depends on the speed of the healing process. This also suggests that the effectiveness of monetary policy should be judged by balance sheet repair as well as the speed of growth in the economy.

    15 See Harris (2014).

    Judging from a variety of metrics, easy policy seems to have accelerated the healing process:

    – Banks are in better shape, with more capital, a lot less bad debt and the ability to withstand serious stress tests.

    – The housing market has worked off most of its bad loans and both price action and turnover rates are back to normal.

    – There has been a full recovery in the ratio of household net worth to income, the debt-to-income ratio has tumbled, and debt service has dropped to its lowest level in its 34-year history (Exhibit 4.21).

    – High-yield companies have been able to refinance and avoid defaults despite a feeble recovery.

    In our view, these metrics suggest the balance sheet repair is well advanced.

    A second problem with the secular stagnation argument is that it ignores the role of fiscal policy in driving aggregate demand. This economic recovery has seen major fiscal tightening, starting at the state and local level and then shifting to the federal level. Despite the weak economic recovery, the 5.5 pp improvement in the deficit-to-GDP ratio from 2011 to 2014 was by far the fastest consolidation in modern US history (Exhibit 4.22). A number of recent studies suggest that fiscal policy is particularly potent when interest rates are stuck at the zero lower bound.

    Adding to the headwinds, this consolidation has been accompanied by a series of confidence-shaking budget battles, including a “fiscal cliff” and repeated threats of default or shutdown. The result is a series of spikes in the “policy uncertainty index” developed by Baker, Bloom and Davis (2013) (Exhibit 4.23). It is a bit odd to have a Keynesian theory of inadequate demand such as “secular stagnation” that does not include a discussion of the role of contractionary fiscal policy in creating that shortfall.

    4H. Summary: the new equilibrium

    In some ways the received wisdom on the economy has come full circle: the optimistic “Great Moderation” has been replaced with its near-opposite, “Secular Stagnation.” The truth seems to be somewhere in between. Some of the moderation was earned at the expense of asset bubbles. Some of the stagnation is cyclical. If our narrative is correct, the weak economic recovery of the past five years is not evidence of secular stagnation, but evidence of severe medium-term headwinds. The real test is happening as we speak: with significant healing from the 2008-9 crisis, will the recent pick-up in growth continue, creating a full recovery in the economy? And will the economy withstand higher interest rates? Judging from the previous three business cycles (and recent growth data!), we think the answer to both questions is “yes.”

    16 See Christiano, Eichenbaum, and Rebelo (2011). One of the ironies of the secular stagnation debate is that some of its strongest advocates are also strong supporters of more stimulative fiscal policy. For example, Krugman (2014) argues that the recent actions in Washington have been like someone hitting themselves with a baseball bat, and now that the beating is over the economy is doing better.













    Ang, Andrew A., and Geert Bekaert, 2002, “Regime Switches in Interest Rates,” Journal of Business and Economic Statistics 20, 163-182.

    Andrews, Donald W. K., 1993, “Parameter Instability and Structural Change with Unknown Change Point,” Econometrica 61(4), 821-856.

    Andrews, Donald W. K., 2003, “Tests for Parameter Instability and Structural Change with Unknown Change Point: A Corrigendum,” Econometrica 71(1), 395-397.

    Angry Bear, 2009, “Current Recession vs the 1980-82 Recession,” http://angrybearblog.com/2009/06/current-recession-vs-1980-82-recession.html

    Baker, Scott R., Nicholas Bloom, and Steven J. Davis, 2013, “Measuring Economic Policy Uncertainty,” manuscript, University of Chicago.

    Bai, Jushan and Pierre Perron, 1998, “Testing for and Estimation of Multiple Structural Changes,” Econometrica 66(1), 47-78.

    Bai, Jushan, and Pierre Perron, 2003, “Computation and Analysis of Multiple Structural Change Models,” Journal of Applied Econometrics 18, 1-22.

    Balke, Nathan and Robert J. Gordon, 1989, “The Estimation of Prewar GNP: Methodology and New Evidence,” Journal of Political Economy 97, 38-92.

    Barsky, Robert B., 1987, “The Fisher Hypothesis and the Forecastability and Persistence of Inflation,” Journal of Monetary Economics 19, 3-24.

    Barsky, Robert, Alejandro Justiniano, and Leonardo Melosi, 2014, “The Natural Rate of Interest and Its Usefulness for Monetary Policy,” American Economic Review: Papers & Proceedings 104(5), 37-43.

    Bernanke, Ben S., 2005, “The Global Savings Glut and the U.S. Current Account Deficit,” Sandridge Lecture (http://www.federalreserve.gov/boarddocs/speeches/2005/200503102/).

    Campbell, John Y. and John Cochrane, 1999, “By Force of Habit: a Consumption-based Explanation of Aggregate Stock Market Behavior,” Journal of Political Economy 107(2), 205-251.

    Canzoneri, Matthew B., Robert E. Cumby, Behzad T. Diba, 2007, “Euler Equations and Money Market Interest Rates: a Challenge for Monetary Policy Models,” Journal of Monetary Economics 54, 1863-1881.

    Caballero, Ricardo, 2006, “On the Macroeconomics of Asset Shortages,” NBER Working Paper 12753.

    Caporale, Tony, and Kevin B. Grier, 2000, “Political Regime Change and the Real Interest Rate,” Journal of Money, Credit, and Banking 32, 320-334.

    Christiano, Lawrence J., Martin Eichenbaum and Charles L. Evans, 2005, “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy 113, 1-45.

    Clarida, Richard, 2014, “Navigating the New Neutral”, Economic Outlook, PIMCO, November.

    Clarida, Richard, Jordi Galí and Mark Gertler, 2002, “A Simple Framework for International Monetary Policy Analysis,” Journal of Monetary Economics 49, 879-904.

    Clark, Gregory, 2005, “The Interest Rate in the Very Long Run: Institutions, Preferences, and Modern Growth,” working paper, U.C. Davis.

    Clark, Todd E. and Sharon Kozicki, 2005, “Estimating equilibrium real interest rates in real time,” North American Journal of Economics and Finance 16, 395-413.

    Curdia, Vasco, Andrea Ferrero, Ging Cee Ng, and Andrea Tambalotti, 2014, “Has U.S. Monetary Policy Tracked the Efficient Interest Rate?”, FRB San Francisco Working Paper 2014-12.

    De Paoli, Bianca and Pawel Zabczyk, 2013, “Cyclical Risk Aversion, Precautionary Saving, and Monetary Policy,” Journal of Money, Credit and Banking 45(1), 1-36.

    Dennis, Richard, 2009, “Consumption Habits in a New Keynesian Business Cycle Model,” Journal of Money, Credit and Banking 41(5), 1015-1030.

    Ferguson Jr., Roger W., 2004, “Equilibrium Real Interest Rate: Theory and Application”, http://www.federalreserve.gov/boarddocs/speeches/2004/20041029/.

    Feroli, Michael E., Ethan S. Harris, Amir Sufi and Kenneth D. West, 2012, “Housing, Monetary Policy, and the Recovery,” Proceedings of the US Monetary Policy Forum, 2012, 3-52.

    Galí, Jordi, 2008, Monetary Policy, Inflation and the Business Cycle, Princeton: Princeton University Press.

    Garcia, Rene, and Pierre Perron, 1996, “An Analysis of the Real Interest Rate Under Regime Shifts,” Review of Economics and Statistics78, 111-125.

    Gordon, Robert J., 2012, “Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds,” NBER Working Paper No. 18315.

    Gordon, Robert J., 2014, “The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections,” NBER Working Paper No. 19895.

    Greenspan, Alan, 2005, “Testimony of Chairman Alan Greenspan,” Committee on Banking, Housing and Urban Affairs, U.S. Senate.

    Gürkaynak, Refet S., Brian Sack, and Jonathan H. Wright, 2010, “The TIPS Yield Curve and Inflation Compensation,” American Economic Journal: Macroeconomics 2, 70-92.

    Hansen, Alvin, 1939, “Economic Progress and Declining Population Growth,” American Economic Review 29(1), 1-15.

    Harris, Ethan, 2014, “The Great Wobble,” US Economic Watch, B of A Merrill Lynch, October 13.

    Harris, Ethan S., Bruce C. Kasman, Matthew D. Shapiro and Kenneth D. West, 2009, “Oil and the Macroeconomy: Lessons for Monetary Policy,” US Monetary Policy Forum.

    Hatzius, Jan, Sven Jari Stehn, and Jose Ursua, 2014, “Some Long-Term Evidence on Short-Term Rates,” Goldman Sachs US Economics Analyst, 14/25, June 20.

    Hatzius, Jan, 2006, “Housing Holds the Key to Fed Policy,” Goldman Sachs Global Economics Paper No. 137, February 2.

    Kocherlakota, Narayana R., 1996, “The Equity Premium: It’s Still a Puzzle,” Journal of Economic Literature 34(1), 42-71.

    Krugman, Paul R., Kathryn M. Dominguez and Kenneth Rogoff, 1998, “Japan’s Slump and the Return of the Liquidity Trap,” Brookings Papers on Economic Activity 1998(2), 137-205.

    Krugman, Paul R., 2013, “Secular Stagnation, Coalmines, Bubbles, and Larry Summers,” New York Times, http://krugman.blogs.nytimes.com/2013/11/16/secular-stagnation-coalmines-bubbles-and-larry-summers.

    Laubach, Thomas and John C. Williams, 2003, “Measuring the Natural Rate of Interest,” The Review of Economics and Statistics 85(4), 1063-1070.

    Leduc, Sylvain and Glenn D. Rudebusch, 2014, “Does Slower Growth Imply Lower Interest Rates?,” FRBSF Economic Letter 2014-33.

    McCulley, Paul, 2003, “Needed: Central Bankers with Far Away Eyes,” PIMCO, Global Central Bank Focus.

    Mehra, Rajnish and Edward C. Prescott, 2003, “The Equity Premium in Retrospect,” 889-938 in G. Constantinides, M. Harris and R. Stulz (eds), Handbook of the Economics of Finance, vol 1B, Amsterdam: Elsevier.

    Misra, Priya et al., 2015, “A Brave New World,” US Rates Weekly, Bank of America Merrill Lynch, January 23.

    Mulligan, Casey, 2009, “Worse Than 1982?”, http://economix.blogs.nytimes.com/2009/06/03/worse-than-1982/?_r=0.

    Orphanides, Athanasios and John C. Williams, 2002, “Robust Monetary Policy Rules with Unknown Natural Rates,” Brookings Papers on Economic Activity 2002(2), 63-145.

    Orphanides, Athanasios and John C. Williams, 2006, “Monetary Policy with Imperfect Knowledge,” Journal of the European Economic Association 4(2-3), 366-375.

    Orphanides, Athanasios and John C. Williams, 2007, “Robust Monetary Policy with Imperfect Knowledge,” Journal of Monetary Economics, 54, 1406-1435.

    Rapach, David E., and Mark E. Wohar, 2005, “Regime Changes in International Real Interest Rates: Are They a Monetary Phenomenon?,” Journal of Money, Credit and Banking 37(5), 887-906.

    Reinhart, Carmen M. and Kenneth S. Rogoff, 2009, This Time Is Different: Eight Centuries of Financial Folly, Princeton, N.J.: Princeton University Press.

    Reinhart, Carmen M., and Kenneth S. Rogoff, 2014, “Recovery from Financial Crises: Evidence from 100 Episodes,” American Economic Review: Papers and Proceedings 104(5), 50-55.

    Reinhart, Carmen M., and M. Belen Sbrancia, 2015, “The Liquidation of Government Debt,” forthcoming, Economic Policy.

    Romer, Christina D., 1989, “The Prewar Business Cycle Reconsidered: New Estimates of Gross National Product, 1869-1908,” Journal of Political Economy 97, 1-37.

    Smets, Frank and Raf Wouters, 2003, “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area,” Journal of the European Economic Association 1(5), 1123-1175.

    Summers, Lawrence, 2013a, “Reflections on the ‘New Secular Stagnation Hypothesis’,” 27-39 in C. Teulings and R. Baldwin (eds.), Secular Stagnation: Facts, Causes, and Cures, (eBook, www.voxeu.org/sites/default/files/Vox_secular_stagnation.pdf), CEPR.

    Summers, Lawrence, 2013b, “Larry Summers Remarks at IMF Annual Research Conference,” https://www.facebook.com/notes/randy-fellmy/transcript-of-larry-summers-speech-at-the-imf-economic-forum-nov-8-2013/585630634864563.

    Summers, Lawrence, 2014, “U.S. Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound,” Business Economics 49(2).

    Taylor, John B., 1993, “Discretion versus policy rules in practice,” Carnegie-Rochester Conference Series on Public Policy 39, 195–214.

    Taylor, John B., 1999, “A Historical Analysis of Monetary Policy Rules”, 319-341 in J. B. Taylor, ed., Monetary Policy Rules, Chicago:
    University of Chicago Press.

    Weil, Philippe, 1989, “The Equity Premium Puzzle and the Risk-free Rate Puzzle,” Journal of Monetary Economics 24, 401-421.

    White, Halbert, 1980, “A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity,” Econometrica 48, 817-838.

    Wieland, Johannes, 2014, “Are Negative Supply Shocks Expansionary at the Zero Lower Bound?,” working paper, UCSD.

    Zarnowitz, Victor, 1997, “Appendix,” 731-737 in Glasner, D. (ed.) Business Cycles and Depressions: An Encyclopedia, Garland Publishing: New York.




































  5. Francis Longstaff paper



    Francis A. Longstaff is with the UCLA Anderson School and the NBER, and is a consultant to Blackrock. I am grateful for helpful discussions with Maureen Chakraborty and Stephen Schurman. All errors are my responsibility. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.



    Valuing Thinly-Traded Assets
    Francis Longstaff
    NBER Working Paper No. 20589
    October 2014
    JEL No. G12,G32


    We model illiquidity as a restriction on the stopping rules investors can follow in selling assets, and apply this framework to the valuation of thinly-traded investments. We find that discounts for illiquidity can be surprisingly large, approaching 30 to 50 percent in some cases. Immediacy plays a unique role and is valued much more than ongoing liquidity. We show that investors in illiquid enterprises have strong incentives to increase dividends and other cash payouts, thereby introducing potential agency conflicts. We also find that illiquidity and volatility are fundamentally entangled in their effects on asset prices. This aspect may help explain why some assets are viewed as inherently more liquid than others and why liquidity concerns are heightened during financial crises.



    Thinly-traded assets are often defined as investments for which there is no liquid market available. Thus, investors holding illiquid or thinly-traded assets may not be able to sell their positions for extended periods, if ever. At best, investors may only be able to sell in infrequent privately-negotiated transactions. The economics of these private transactions, however, are complicated, since prospective buyers realize that they will inherit the same problem when they later want to resell the assets. Not surprisingly, sales of thinly-traded assets typically occur at prices far lower than would be the case if there were a liquid public market. The valuation of thinly-traded assets is one of the most important unresolved issues in asset pricing. One reason for this is that thinly-traded assets collectively represent a large fraction of the aggregate wealth in the economy. Key examples where investors may face long delays before being able to liquidate holdings include:

    – Sole proprietorships.
    – Partnerships, limited partnerships.
    – Private equity and venture capital.
    – Life insurance and annuities.
    – Pensions and retirement assets.
    – Residential and commercial real estate.
    – Private placements of debt and equity.
    – Distressed assets and fire sales.
    – Compensation in the form of restricted options and shares.
    – Investments in education and human capital.
    Other examples include transactions that take public firms private, such as leveraged buyouts (LBOs) that leave residual equityholders with much less liquid positions. Many hedge funds have lock-up provisions that prohibit investors from withdrawing their capital for months or even years. Investors in initial public offerings (IPOs) are often allocated shares with restrictions on reselling or “flipping” the shares.

    Many insightful approaches have been used in the asset pricing literature to study the effects of illiquidity on security prices. Important examples include Amihud and Mendelson (1986), Constantinides (1986), Vayanos (1998), Vayanos and Vila (1999), Acharya and Pedersen (2005) and others who model the relation between asset prices and transaction costs. Duffie, Garleanu, and Pedersen (2005, 2007) study the role that search costs may play in the valuation of securities in illiquid markets. Gromb and Vayanos (2002) and Brunnermeier and Pedersen (2009) consider how the funding constraints faced by market participants can affect market liquidity and security values. Shleifer and Vishny (1992, 2011), Coval and Stafford (2007), and others focus on the effects of financial constraints on security prices in fire sales and forced liquidations. Longstaff (2009) solves for equilibrium security prices in a model in which agents can only trade intermittently.

    This paper approaches the challenge of valuing illiquid assets from a new perspective. Specifically, we view illiquidity as a restriction on the stopping rules that an investor is allowed to follow in selling the asset. This approach allows us to use an option-theoretic framework to place realistic lower bounds on the values of securities that cannot be traded continuously. Intuitively, these bounds are determined by solving for the value of an option that would compensate an investor for having to follow a buy-and-hold strategy rather than being able to follow an optimal stopping strategy in selling the asset.
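
    The intuition behind such an option-based bound can be illustrated with a small simulation. The sketch below is in the spirit of the earlier Longstaff (1995) construction that this paper generalizes, not the paper's actual derivation: it compares the proceeds of an investor with perfect selling discretion, who exits at the running maximum of the price path, against a buy-and-hold investor, under an assumed geometric Brownian motion with hypothetical parameters. The expected difference is the value of an option that compensates the buy-and-hold investor, here expressed per dollar of initial asset value.

```python
import math
import random

def discount_bound(s0=1.0, sigma=0.30, years=2.0, steps=252,
                   paths=5000, seed=7):
    """Monte Carlo estimate of E[max(S_t) - S_T]: the expected extra
    proceeds from selling at the path maximum rather than holding to
    the horizon, per dollar of initial value. Parameters hypothetical."""
    rng = random.Random(seed)
    dt = years / steps
    drift = -0.5 * sigma ** 2 * dt        # zero-rate (martingale) GBM
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        s, s_max = s0, s0
        for _ in range(steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            s_max = max(s_max, s)
        total += s_max - s                # perfect timing vs buy-and-hold
    return total / paths

# For 30% volatility over a 2-year horizon, the estimated value of the
# compensating option is a sizable fraction of the asset's value.
print(discount_bound())
```

    Consistent with the paper's message, this quantity grows with both volatility and the length of the illiquidity horizon, since both widen the gap between the best achievable sale price and the terminal value.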

    There are many reasons why having a lower bound on the value of an illiquid asset could be valuable. For example, the lower bound could serve as a reservation price in negotiations between sellers and prospective buyers. Having a lower bound on the value of illiquid assets held by financial institutions can provide guidance to policymakers in making regulatory capital decisions. The lower bound also establishes limits on the collateral value of illiquid or thinly-traded assets used to secure debt financing or held in margin accounts. Recent changes to generally accepted accounting principles (GAAP) explicitly acknowledge that firms holding illiquid assets may need to base their valuations on unverifiable estimates.1 These lower bounds provide a conservative but much more objective standard for valuing these types of illiquid assets.

    The results provide a number of important insights into the potential effects of illiquidity on asset values. First, we show that the value of immediacy in financial markets is much higher than the value of future liquidity. For example, the discount for the first day of illiquidity is 2.4 times that for the second day, 4.2 times that for the fifth day, 6.2 times that for the tenth day, and 20.0 times that for the 100th day. These results suggest that immediacy is viewed as fundamentally different in its nature. This dramatic time asymmetry in the value of liquidity may also help explain the rapidly growing trend towards electronic execution and high-frequency trading in many financial markets.
    Second, our results confirm that the values of illiquid assets can be heavily discounted in the market. We show that investors could discount the value of illiquid stock by as much as 10, 20, or 30 percent for illiquidity horizons of 1, 2, or 5 years, respectively. Although our results only provide lower bounds on the values of illiquid assets, the evidence in the empirical literature suggests that these bounds may be realistic approximations of the prices at which various types of thinly-traded securities are sold in privately-negotiated transactions. For example, Amihud, Mendelson, and Pedersen (2005) report that studies of the pricing of restricted letter stock find average discounts ranging from 20 to 35 percent for illiquidity horizons of one to two years. In addition, Brenner, Eldor, and Hauser (2001) find that thinly-traded currency options are placed privately at roughly a 20 percent discount to fully liquid options.

    1 For example, Statement of Financial Accounting Standards (SFAS) 157 allows for the use of unverifiable inputs in the valuation of a broad category of illiquid assets that are designated as Level 3 investments.

    Third, we find that the effects of illiquidity and volatility on asset prices are fundamentally entangled. Specifically, asset return variances and the degree of asset illiquidity are indistinguishable in their effects on discounts for illiquidity. This makes intuitive sense since investors are more likely to want to sell assets when prices have diverged significantly from their original purchase prices. This divergence, however, can arise both through the passage of time as well as through the volatility of asset prices. Because of this, assets with stable prices such as cash or short-term Treasury bills can be viewed as inherently more liquid than assets such as stocks even when all are readily tradable. This may also help explain why concerns about market liquidity become much more central during financial crises and periods of market stress.

    Finally, the results indicate that the effect of illiquidity on asset prices is smaller for investments with higher dividends or cash payouts. An important implication of this is that investors in illiquid assets such as private equity, venture capital, leveraged buyouts, etc. have strong economic incentives to increase payouts. Thus, illiquidity may have the potential to be a fundamental driver of both dividend policy and capital structure decisions for privately-held ventures or thinly-traded firms.

    The remainder of this paper is as follows. Section 2 reviews the literature on the valuation of illiquid assets. Section 3 describes our approach to modeling illiquidity. Section 4 uses this approach to derive lower bounds on the values of illiquid or thinly-traded assets. Section 5 discusses the asset pricing implications. Section 6 extends the results to assets that pay dividends. Section 7 summarizes the results and makes concluding remarks.


    The literature on the effects of illiquidity on asset valuation is too extensive for us to be able to review in detail. Instead, we will simply summarize some of the key themes that have been discussed in this literature. For an in-depth survey of this literature, see the excellent review by Amihud, Mendelson, and Pedersen (2005) on liquidity and asset prices.

    Many important papers in this literature focus on the role played by transaction costs and other financial frictions in determining security prices. Amihud and Mendelson (1986) present a model in which risk-neutral investors consider the effect of future transaction costs in determining current valuations for assets. Constantinides (1986) shows that while transaction costs can have a large effect on trading volume, investors optimally trade in a way that mitigates the effect of transaction costs on prices. Heaton and Lucas (1996) study the effects of transaction costs on asset prices and risk sharing in an incomplete markets setting. Vayanos (1998) and Vayanos and Vila (1999) show that transaction costs can increase the value of liquid assets, but can have an ambiguous effect on the values of illiquid assets.

    Another important theme in the literature is the role of asymmetric information. Glosten and Milgrom (1985) model a market maker who provides liquidity and sets bid-ask prices conditional on the sequential arrival of orders from potentially informed agents. Brunnermeier and Pedersen (2005) develop a model in which large investors who are forced to sell are exploited via predatory trading by other traders, and show how the resulting illiquidity affects asset valuations.

    A number of recent papers recognize that liquidity is time varying and develop models in which liquidity risk is priced into asset valuations. Pastor and Stambaugh (2003) consider a model in which marketwide systemic liquidity risk is priced. Acharya and Pedersen (2005) show how time-varying liquidity risk affects current security prices and future expected returns. Gromb and Vayanos (2002) and Brunnermeier and Pedersen (2009) develop models in which changes in the abilities of dealers to fund their inventories translate into variation in the liquidity they can provide, which in turn results in a liquidity risk premium being embedded in asset values.

    Another recent theme in the literature addresses the effects of search costs or the cost of being present in the market on liquidity and asset prices. Duffie, Garleanu, and Pedersen (2005), Vayanos (2007, 2008), and others consider models in which agents incur costs as they search for other investors willing to trade with them, and show how these costs affect security prices. Huang and Wang (2008a, 2008b) study asset pricing in a market where it is costly for dealers to be continuously present in the market and provide liquidity.

    A number of papers in the literature view illiquidity from the perspective of a limitation on the ability of an agent to trade continuously. Lippman and McCall (1986) define liquidity in terms of the expected time to execute trading strategies. Longstaff (2001) and Kahl, Liu, and Longstaff (2003) study the welfare effects imposed on investors by liquidity restrictions on assets. Longstaff (2009) presents a general equilibrium asset pricing model in which agents must hold asset positions for a fixed horizon rather than being able to trade continuously.

    Finally, several papers approach the valuation of liquidity from an option-theoretic perspective. Copeland and Galai (1983) model limit orders as an option given to informed investors. Chacko, Jurek, and Stafford (2008) value immediacy by modeling limit orders as American options. Ang and Bollen (2010) model the option to withdraw funds from a hedge fund as a real option. Ghaidarov (2014) models the option to sell equity securities as a forward-starting put option.

    The papers most similar to this one are Longstaff (1995) and Finnerty (2012), who present models in which investors are assumed to follow specific trading strategies, which allows bounds on illiquid asset values to be derived. These papers, however, result in discounts for illiquidity with counterintuitive properties, such as exceeding the value of the liquid asset or not being monotonic in the illiquidity horizon. This paper differs fundamentally from these papers in that we allow investors to follow optimal stopping strategies in making selling decisions. An important advantage of this is that it leads to bounds that are much more realistic.










    We model illiquidity as a restriction on the stopping rules that an investor can follow in selling asset holdings. We use this framework to derive realistic lower bounds on the value of illiquid and thinly-traded investments.

    A number of important asset pricing insights emerge from this analysis. For example, we show that immediacy plays a unique role and is much more highly valued than ongoing liquidity. In addition, we show that illiquidity can reduce the value of an asset substantially. For illiquidity horizons on the order of those common in private equity, the discount for illiquidity can be as much as 30 to 50 percent. Although large in magnitude, these discounts are consistent with the empirical evidence on the valuation of thinly-traded assets. Thus, these lower bounds could be useful in determining reservation prices and providing conservative valuations in situations where other methods of valuation are not available.

    Finally, we find that the discount for illiquidity decreases as the cash flow generated by the underlying asset increases. Thus, investors in private ventures may have strong incentives to increase dividends and other cash flows to reduce the impact of illiquidity on their holdings. This implies that the illiquid nature of investments in partnerships, private equity, venture capital, LBOs, etc. has the potential to introduce agency conflicts as cash flow policy is affected.


    Acharya, Viral V., and Lasse H. Pedersen, 2005, Asset Pricing with Liquidity Risk, Journal of Financial Economics 77, 375-410.

    Amihud, Yakov, and Haim Mendelson, 1986, Asset Pricing and the Bid-Ask Spread, Journal of Financial Economics 17, 223-249.

    Amihud, Yakov, Beni Lauterbach, and Haim Mendelson, 1997, Market Microstructure and Securities Values: Evidence from the Tel Aviv Exchange, Journal of Financial Economics 45, 365-390.

    Amihud, Yakov, Haim Mendelson, and Lasse H. Pedersen, 2005, Liquidity and Asset Prices, Foundations and Trends in Finance 1, 269-364.

    Ang, Andrew, and Nicolas Bollen, 2010, Locked Up by a Lockup: Valuing Liquidity as a Real Option, Financial Management 39, 1069-1095.

    Berkman, H., and V. R. Eleswarapu, 1998, Short-term Traders and Liquidity: A Test Using Bombay Stock Exchange Data, Journal of Financial Economics 47, 339-355.

    Brenner, Menachem, Rafi Eldor, and Shmuel Hauser, 2001, The Price of Options Illiquidity, Journal of Finance 56, 789-805.

    Brunnermeier, Markus, and Lasse H. Pedersen, 2005, Predatory Trading, Journal of Finance 60, 1825-1863.

    Brunnermeier, Markus, and Lasse H. Pedersen, 2009, Market Liquidity and Funding Liquidity, Review of Financial Studies 22, 2201-2238.

    Copeland, Thomas E., and Dan Galai, 1983, Information Effects on the Bid-Ask Spread, Journal of Finance 38, 1457-1469.

    Chacko, George C., Jakub W. Jurek, and Erik Stafford, 2008, The Price of Immediacy, Journal of Finance 63, 1253-1290.

    Constantinides, George M., 1986, Capital Market Equilibrium with Transaction Costs, Journal of Political Economy 94, 842-862.

    Demsetz, Harold, 1968, The Cost of Transacting, Quarterly Journal of Economics 82, 33-53.

    Duffie, Darrell, Nicolae Garleanu, and Lasse H. Pedersen, 2005, Over-the-Counter Markets, Econometrica 73, 1815-1847.

    Finnerty, John D., 2012, An Average Strike Put Option Model of the Marketability Discount, Journal of Derivatives 19, 53-69.

    Ghaidarov, Stillian, 2014, Analytical Bound on the Cost of Illiquidity for Equity Securities Subject to Sale Restrictions, Journal of Derivatives 21 (4), 31-48.

    Glosten, Lawrence R., and Paul R. Milgrom, 1985, Bid, Ask, and Transaction Prices in a Specialist Market with Heterogeneously Informed Traders, Journal of Financial Economics 14, 71-100.

    Gromb, Denis, and Dimitri Vayanos, 2002, Equilibrium and Welfare in Markets with Financially Constrained Arbitrageurs, Journal of Financial Economics 66, 361-407.

    Grossman, Sanford J., and Merton H. Miller, 1988, Liquidity and Market Structure, Journal of Finance 43, 617-633.

    Heaton, John, and Deborah Lucas, 1996, Evaluating the Effects of Incomplete Markets on Risk Sharing and Asset Pricing, Journal of Political Economy 104, 443-487.

    Huang, Jennifer, and Jiang Wang, 2009, Liquidity and Market Crashes, Review of Financial Studies 22, 2607-2643.

    Huang, Jennifer, and Jiang Wang, 2010, Market Liquidity, Asset Prices, and Welfare, Journal of Financial Economics 95, 107-127.

    Kahl, Matthias, Jun Liu, and Francis A. Longstaff, 2003, Paper Millionaires: How Valuable is Stock to a Stockholder Who is Restricted from Selling it?, Journal of Financial Economics 67, 385-410.

    Lippman, Steven A., and John J. McCall, 1986, An Operational Measure of Liquidity, American Economic Review 76, 43-55.

    Longstaff, Francis A., 1995, How Much Can Marketability Affect Security Values?, Journal of Finance 50, 1767-1774.

    Longstaff, Francis A., 2001, Optimal Portfolio Choice and the Valuation of Illiquid Securities, Review of Financial Studies 14, 407-431.

    Longstaff, Francis A., 2009, Portfolio Claustrophobia: Asset Pricing in Markets with Illiquid Assets, American Economic Review 99, 1119-1144.

    Pastor, Lubos, and Robert Stambaugh, 2003, Liquidity Risk and Expected Stock Returns, Journal of Political Economy 111, 642-685.

    Silber, William L., 1991, Discounts on Restricted Stock: The Impact of Illiquidity on Stock Prices, Financial Analysts Journal 47, 60-64.

    Stoll, Hans R., 2000, Friction, Journal of Finance 55, 1479-1514.

    Vayanos, Dimitri, 1998, Transaction Costs and Asset Prices: A Dynamic Equilibrium Model, Review of Financial Studies 11, 1-58.

    Vayanos, Dimitri, and Jean-Luc Vila, 1999, Equilibrium Interest Rate and Liquidity Premium with Transaction Costs, Economic Theory 13, 509-539.

    Vayanos, Dimitri, and Tan Wang, 2007, Search and Endogenous Concentration of Liquidity in Asset Markets, Journal of Economic Theory 136, 66-104.

    Vayanos, Dimitri, and Pierre-Olivier Weill, 2008, A Search-Based Theory of the On-the-Run Phenomenon, Journal of Finance 63, 1361-1398.









  6. Richard R. Lindsey paper

    Forced Liquidations, Fire Sales, and the Cost of Illiquidity


    Richard R. Lindsey & Andrew B. Weisman
    Q Group
    October 18, 2015

    This presentation is for information purposes only and should not be used or construed as an offer to sell, a solicitation of an offer to buy, or a recommendation for any security. There is no guarantee that the information supplied is accurate, complete, or timely, nor does it make any warranties with regard to the results obtained from its use. It is not intended to indicate or imply in any manner that current or past results are indicative of future profitability or expectations. As with all investments, there are inherent risks that individuals would need to consider.

    The views expressed are those of the speaker and do not necessarily reflect the views of others in the Janus organization.

    This material may not be reproduced in whole or in part in any form, or referred to in any other publication, without express written permission. Janus is a registered trademark of Janus International Holding LLC. © Janus International Holding LLC.

    Once Upon a Time…

    There was an (almost) magical hedge fund with high returns and low volatility…


    Once Upon a Time…


    But subprime mortgage delinquencies grew, and the value of securities held by the fund dropped…

    Once Upon a Time…

    The Prime Brokers for the fund asked for more cash collateral…

    The fund tried to liquidate assets in a declining market to meet the collateral calls…

    But asset values continued to decline quickly while collateral requirements continued to rise…

    Once Upon a Time…

    The fund failed even though its parent company attempted to stabilize it with a substantial cash injection…

    Investors were returned 9¢ on the dollar…

    And the managers lived happily ever after…

    Portfolio Construction

    Typical approach is to diversify across securities and strategies,
    using the common “currencies”

    Looking for low correlation and low volatility
    Low volatility and correlation often an “accounting artifact”
    Drawn to securities with limited price discovery

    Investors tend to believe in a “liquidity premium” that compensates them for illiquidity

    Liquidity in Portfolios

    Lo et al. (2003)
    Add liquidity as an additional constraint in mean-variance optimization

    Siegel (2008); Leibowitz & Bova (2009)
    Consider liquidity in determining portfolio weights

    Ang et al. (2011)
    Optimal liquidity policy with market frictions

    Kinlaw et al. (2013)
    Liquidity as a shadow allocation in the portfolio

    Serial Correlation & Liquidity

    Illiquid portfolios tend to exhibit a high degree of positive serial correlation (Weisman (2003); Getmansky et al. (2004))

    Methods: Scholes & Williams (1977); Geltner (1993); Getmansky et al. (2004); Bollen & Pool (2008); Anson (2010); Anson (2013)

    Adjust the time series for serial correlation
    Decode the performance to adjust volatility and correlations
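    As one concrete example of the adjustment methods cited above, a Geltner (1993)-style unsmoothing can be sketched as follows. The AR(1) smoothing form and the value of `lam` are illustrative assumptions, not the calibration of any of the cited papers:

```python
# Sketch of Geltner (1993)-style unsmoothing, one of the methods cited above.
# Assumed smoothing form: r_t = (1 - lam) * true_t + lam * r_{t-1}, so the
# true return can be recovered as true_t = (r_t - lam * r_{t-1}) / (1 - lam).
# In practice lam would be estimated, e.g., from the first-order
# autocorrelation of reported returns; here it is a hypothetical value.

def unsmooth(reported, lam):
    """Recover (approximate) true returns from a smoothed reported series."""
    true = []
    for t in range(1, len(reported)):
        true.append((reported[t] - lam * reported[t - 1]) / (1.0 - lam))
    return true

# Hypothetical smoothed series: unsmoothing raises the measured volatility.
reported = [0.010, 0.012, 0.011, 0.013, 0.009, 0.014]
print(unsmooth(reported, lam=0.55))
```

    Note that unsmoothing rescales the deviations between consecutive reported returns, which is why the recovered series is more volatile than the reported one.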

    Illiquidity: The Cost is Ignored

    Primary Question: Are under-reported volatility and correlation a benign consequence of illiquidity, or is there more to it?

    What should concern you most as an investor?

    We argue that simply adjusting for serial correlation fails to measure or capture the core risk and cost of illiquidity that investors should care about: forced liquidations and “fire sales”

    Causes of Illiquidity

    A mismatch between the funding of an underlying investment and the horizon over which the investment can be sold

    Leverage/Financing: (Garleanu & Pedersen (2009); Brunnermeier & Pedersen (2009); Office of Financial Research (2013))
    –Including swaps, futures, margin

    Contractual terms: (Ang & Bollen (2010))
    –Gates, lock ups, notice periods

    Network factors: (Bhattacharya et al. (2013); Gennaioli et al. (2012); Boyson et al. (2010); Mitchell et al. (2007); Chen et al. (2012); Schmidt et al. (2013))
    –Common service providers (custodians, prime brokers, securities lending counterparties)
    –Unanticipated strategy correlation
    –Common investors

    Liquidity & Reality

    The true value of the portfolio is assumed to follow a discrete Brownian motion


    Bayesian process of adjusting some proportion of the distance between the prior period's valuation and what it's perceived to be worth in the current period (Quan and Quigley (1991))

    Liquidity & Reality

    The observed (reported) return is a function of:
    –The trend rate of return
    –The realized volatility
    –The under/over-valuation of the prior period
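    As a rough illustration of the dynamic described on these slides, the following sketch generates a reported series from a discrete Brownian motion for the true value. The Quan-Quigley-style partial-adjustment rule and the parameters `mu`, `sigma`, and `lam` are illustrative assumptions, not the presenters' calibration:

```python
import random

# Minimal sketch of the smoothing dynamic described above. Assumed form:
# the reported value closes only (1 - lam) of the gap to the true value
# each period, so the reported return depends on the trend rate of return,
# the realized volatility, and the prior period's under/over-valuation.

def simulate(mu=0.07, sigma=0.10, lam=0.55, periods=12, seed=7):
    """Return the per-period reported-minus-true valuation gaps."""
    random.seed(seed)
    true_val, reported = 1.0, 1.0
    gaps = []
    for _ in range(periods):
        # Discrete Brownian motion for the true value (drift plus noise).
        true_val *= 1.0 + mu / periods + sigma * (periods ** -0.5) * random.gauss(0.0, 1.0)
        # Partial adjustment of the reported value toward the true value.
        reported = lam * reported + (1.0 - lam) * true_val
        gaps.append(reported - true_val)  # positive = over-valuation
    return gaps

print(simulate())
```

    In a declining market the gaps turn positive: the smoothed reported value lags the falling true value, which is exactly the over-valuation that later slides identify as the trigger for forced selling.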


    Liquidity & Reality


    (Not the only method for deriving this prior: common sense “sanity checks” also useful…)

    Smoothed Value

    How are Nt (true value) and Rt (reported value) related?

    Smoothed Value

    Illiquidity systematically drives under/over-valuation

    Under-valuation not so critical, over-valuation more of an issue:

    Interested third parties will not allow a portfolio valuation to exceed a rational tradable value by more than a “reasonable” margin

    Prime brokers that extend credit monitor reported valuations, since the assets serve as collateral

    We refer to this margin as the “credibility threshold” (L)

    L is effectively determined by the first interested third party (such as a prime broker or investor) to act

    Smoothed Value


    Smoothed Value

    Exceeding the credibility threshold triggers forced behavior (selling)

    May result in a large single-period loss governed by:
    The portfolio overvaluation (Rt − Nt)
    A liquidation penalty (P)

    Such losses are relatively frequent and tend to be larger than conventional data-dependent risk measures such as VaR or CVaR would suggest

    The magnitude and frequency (not the timing) are reasonably predictable, and can be priced by formalizing the basic structural dynamics

    Barrier Option Framework

    Simulate the “true” value of the portfolio using a discrete Brownian motion
    Simulate 100k times and calculate the mean NPV of all the one-year paths (including those which do not cause liquidation)
    This naturally translates into a “haircut” against the observed return and represents a de facto price for investing in a less liquid portfolio
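    The simulation described above might be sketched as follows. The credibility threshold `L`, liquidation penalty `P`, smoothing parameter `lam`, drift, volatility, and path count are all illustrative placeholders, not the authors' calibration:

```python
import random

# Sketch of the barrier-option haircut described above (all parameters are
# hypothetical). Simulate the true value as a discrete Brownian motion,
# smooth the reported value, and force a sale at penalty P whenever the
# reported over-valuation exceeds the credibility threshold L. The mean
# shortfall across paths acts as a de facto haircut on the observed return.

def liquidity_haircut(mu=0.07, sigma=0.12, lam=0.5, L=0.05, P=0.10,
                      periods=12, n_paths=10_000, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(n_paths):
        true_val, reported = 1.0, 1.0
        for _ in range(periods):
            true_val *= 1.0 + mu / periods + sigma * (periods ** -0.5) * random.gauss(0.0, 1.0)
            reported = lam * reported + (1.0 - lam) * true_val
            if reported - true_val > L:   # credibility threshold breached
                true_val *= 1.0 - P       # fire-sale at the liquidation penalty
                reported = true_val       # valuation resets to the sale price
        total += reported
    mean_terminal = total / n_paths
    # Shortfall relative to an approximate frictionless mean of 1 + mu.
    return (1.0 + mu) - mean_terminal

print(round(liquidity_haircut(), 4))
```

    The haircut grows as the threshold L tightens, the penalty P rises, or smoothing lam increases, which mirrors the option-value sensitivities discussed in the slides.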

    Option Value: L & λ


    Option Value: P & λ


    Option Value: P & L


    Option Sensitivities


    Additional Considerations

    The option value is not a liquidity premium; rather, it is the calculated cost of price-smoothing an illiquid portfolio combined with a triggering event that may result in an abrupt sale into a declining market

    When the portfolio is illiquid, managers generally do not have the flexibility to avoid these dynamics

    Parameter Considerations

    In cases of fraud or collapse, transactions in the secondary market for hedge funds have an average discount to NAV of 49.6% (Ramadorai (2008))

    JPMorgan (2012)
    Hedge funds expected return 5% to 7%
    Hedge funds expected volatility 7% to 13%

    Private equity expected returns 9%
    Private equity expected volatility 34.25%

    Are these sufficient returns given the volatility?

    Pricing Liquidity in Alternative Investments (Indices)


    Measured serial correlations for most of these lie in the 50% to 60% range

    Managers are typically reflecting less than 50% of the true change in the value of their portfolios

    Depending on assumptions concerning other parameters, the option value could be quite significant!

    Example: Emerging Market liquidity option: 13.52%; Observed return: 17.3%; Liquidity-adjusted return: 3.78%
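    The adjustment in this example is a straight subtraction of the option value from the observed return:

```python
# The liquidity adjustment is a simple haircut: subtract the option value
# from the observed return (figures from the Emerging Market example above).
observed = 0.1730       # observed (reported) return
option_value = 0.1352   # barrier-option liquidity cost
adjusted = observed - option_value
print(f"{adjusted:.2%}")  # 3.78%
```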

    Pricing Liquidity in Alternative Investments (Funds)

    Morningstar-CISDM Hedge Fund Database (contains both live and dead funds)
    Eliminated CTAs and Fund of Funds
    At least 24 months of return history
    Autocorrelation of 0.01 or higher
    Eliminate the last 3 months of data for each manager

    3,554 hedge funds
    Average option value was 5.52%, implying an average liquidity-adjusted mean return of 6.27%

    Pricing Liquidity in Alternative Investments (Funds)


    Option Values vs Drawdowns


    The Poster Child

    The (almost) magical fund: Bear Stearns High-Grade Structured Credit Strategies

    µ=12.4%    σ=1.5%    λ=0.3635

    Option value close to $0, but…

    The standard deviation for the HFRI Fixed Income–Asset Backed Index: 4.03%

    The Bear Stearns Fund was showing ≈ 1/3 of the index volatility

    The Poster Child

    As the fund’s volatility approached the index volatility, the option cost exploded


    Summary & Conclusion

    Adjusting for serial correlation fails to measure or capture the core risk and cost of illiquidity: forced liquidations and “fire sales”

    A barrier option model provides a straightforward method of combining priors about the market to price this core risk



  7. Edwin J. Elton paper

    Target Date Funds:

    Characteristics and Performance

    Edwin J. Elton
    Stern School of Business, New York University

    Martin J. Gruber
    Stern School of Business, New York University

    Andre de Souza
    Fordham University

    Christopher R. Blake
    Fordham University


    Christopher R. Blake, formerly Joseph Keating, S.J. Distinguished Professor at Fordham University, died prior to publication. Send correspondence to Martin J. Gruber, Stern School of Business, New York University, 44 West 4th Street, New York, NY 10012; telephone 212-998-0333; fax 212-995-4233. E-mail: [email protected]


    As a result of poor asset allocation decisions by 401(k) participants, 72% of all plans now offer target date funds, and participants heavily invest in them. Here, we study the characteristics and performance of TDFs, providing a unique view by employing data on TDFs' holdings. We show that additional expenses charged by TDFs are largely offset by the low-cost share classes they hold, not normally open to their investors. Additionally, TDFs are very active in their allocation decisions and increasingly bet on nonstandard asset classes. However, TDFs do not earn alpha from timing or their selection of individual assets. (JEL G11, G23.)

    There is a vast literature in financial economics that finds that participants in 401(k) and 403(b) plans generally make suboptimal asset allocation decisions.1 In response to this evidence, plans have started to offer options in which the asset allocation decision is made for the investor, and in particular options in which the allocation changes as a function of time to retirement. These latter options are referred to as target date funds (TDFs).

    Target date funds have become an important component of pension plans. The growing use of TDFs is no doubt helped by the Department of Labor expanding the set of acceptable default options to include TDFs. In 2011, 72% of 401(k) plans offered target date funds; by 2012, 41% of 401(k) investors held target date funds, and 20% of all 401(k) assets were invested in target date funds (VanDerhei et al. 2012, 2013; Barron's 2014). Target date funds are rapidly growing in importance for 401(k) investors. From 2008 to 2012, the funds grew from $160 to $481 billion in assets, with 91% of these assets in retirement plans. In addition, 43% of the assets of recently hired employees in their 20s with 401(k) plans are invested in target date funds (VanDerhei et al. 2012, 2013).

    All of the target date funds that exist have a goal of reducing the percentage invested in stocks over time; yet the theoretical and empirical support for this asset allocation pattern is mixed. Samuelson (1963) and Merton (1969) derive results demonstrating that constant proportions are optimal. A number of subsequent authors have derived conditions in which a change in stock proportions is optimal. These conditions often involve assumptions about labor income (see, for example, Bodie, Merton, and Samuelson 1991; Campbell et al. 2001; Campbell and Viceira 2002; Cocco, Gomes, and Maenhout 2005). Based on assumptions about labor

    1 See, for example, Ameriks and Zeldes (2004), Benartzi and Thaler (2001), Madrian and Shea (2001), Agnew and Balduzzi (2002, 2003), Liang and Weisbenner (2006), Huberman and Sengmuller (2003), and Elton, Gruber, and Blake (2007).

    income, the authors derive an optimum decrease in stock allocation over time. On the other hand, Shiller (2005) has postulated conditions under which an increase in stock proportions is optimal.

    There is little empirical evidence for the optimality of decreasing stock allocation and increasing bond allocation over time. Poterba et al. (2005, 2009) simulate wealth and utility of wealth at retirement and find that, for most investors, 100% in equity or a fixed-proportion strategy dominates increasing the proportion invested in fixed income as the target date nears.

    This paper does not attempt to add to the important literature on the optimal pattern of allocation over time. Rather, it addresses a second set of issues: given the intended allocation of a TDF, is the investor well served by management's selection of assets?

    Several papers are related to ours. Bhattacharya, Lee, and Pool (2013) show that funds of funds (which include TDFs) increase their investment in funds within their fund family after experiencing large outflows. We examine only TDFs, and we do not find similar results. Balduzzi and Reuter (2013) (using a two-index model) show that TDFs with the same target date have very different bond-stock mixes, total returns, and residual returns. We also find great variation in the bond-stock mix across funds with the same target date. However, our analysis goes well beyond Balduzzi and Reuter's (2013) in that we examine many other characteristics of TDFs and place great emphasis on understanding their expenses and performance. Sandhya (2011) examines several of the issues we also examine. She compares the performance of target date funds and balanced funds. Her conclusions principally use the alpha from a time-series regression of returns of TDFs on bond and stock factors to examine performance and infer management behavior. We collect data on and use the holdings of TDFs to directly observe management actions. We employ a methodology estimating alphas and factor sensitivities that corrects for the changing risk profiles of TDFs over time and explicitly adjusts for the increased use of new asset categories by TDFs over time. Using this methodology, our results on issues that are in common differ from many of those reported in Sandhya.

    This is the first study to examine in detail the holdings of TDFs to determine what they are doing and how well they are doing it. Our study is unique in that we have data on the exact holdings of each target date fund, as well as return and expense data not only on each target date fund but also on each of the funds held. This impacts the type of analysis we perform and the conclusions we reach throughout this paper. In the first section we discuss our sample. This is followed by a section discussing the holdings of target date funds and the expenses associated with their holdings. More specifically, we examine the composition of the assets held by target date funds and how this has changed over time. We find that the actual composition is substantially different from how TDFs have been characterized.

    Most target date funds hold a series of other funds: either publicly traded mutual funds or master trusts. These target date funds are funds of funds, and investors pay fees on the underlying funds and usually pay an added fee to the target date fund itself, the size of which depends on the share class that an investor purchases. In the second part of Section 2 we examine the size of these fees.

    We find that most target date funds invest in low-cost share classes of mutual funds which are not available to any but very wealthy or institutional investors. We show that the underlying share classes have sufficiently low expense ratios that, for most investors, the cost of buying the target date fund with its added expense ratio is only slightly higher than if the investor bought the underlying funds directly.

    In the third section we examine performance. There are two aspects of performance: timing and asset selection. Most target date funds start with a planned series of asset allocations over time, but then vary from the plan depending on perceived market conditions. We show that the stock-bond timing decisions do not enhance the performance of TDFs; if anything, they detract from it.

    The other aspect of performance is asset selection. We find that TDFs have negative alphas similar to those found for mutual funds in general. Finally, we find that a simple strategy of investing in index funds at the initial allocation of the TDF provides lower risk, higher returns, and higher Sharpe ratios than those associated with the TDF.

    A number of authors have provided evidence that some mutual funds behave in ways that hurt shareholder performance but help to meet fund family objectives. Given that we have holdings data, we can examine some of these issues directly, and we do so in Section 4.


    There are close to 1,100 target date funds listed in Morningstar. Many of these represent different proportions of the same underlying mutual funds. For example, a fund family might offer TDFs for 2020, 2025, 2030, . . . , 2050, with several share classes offered for each of these horizons. The different dated TDFs from one fund family will usually hold most of their assets in the same underlying funds, though in different proportions. The principal difference is that the funds with target dates close to the present hold more in debt-type funds and less in equity-type funds.2 The planned pattern of asset allocation over time for a particular fund is usually referred to as its glide path. Table 1 shows the glide path for the Vanguard Target Date Fund.3 If a particular TDF deviates from its glide path, all TDFs with different target dates from the same fund family are likely to deviate in a parallel manner from their glide paths since there is

    2 In addition, TDFs sometimes hold more risky securities, such as commodities and futures.
    3 Vanguard is unusual in that it is one of the few TDFs that gives numeric data for its intended glide path. Most TDFs simply present a picture of their glide paths.

    normally one management team handling all the fund family's TDFs. Given this high commonality, we selected only one dated fund from the dates offered. We chose 2035 if it existed, and 2030 otherwise. Our final sample contained one target date fund from each fund family that offered target date funds. There are fifty families offering target date funds: 40 of the funds have a target date of 2035, and 10 have a target date of 2030. There are in total 229 funds in our sample, representing different share classes of the fifty distinct funds.4 We chose 2035 as our base year as a trade-off between being on a part of the glide path where changes in the equity allocation are fairly constant and yet some changes do occur. We have data from 2004 through 2012. However, many of the funds started after 2004.

    Our sample is more recent than that used by Balduzzi and Reuter (2013), and the majority of our data occur after the passage of the Pension Protection Act of 2006. Our sample consists of two types of data. The first type is data at the TDF level. This includes monthly return data, yearly expense data, and the monthly investment by the TDF in each underlying fund. The second type is data at the level of the underlying fund. For each fund held by a TDF, we have monthly returns, yearly expenses, and the Morningstar classification. The collection of underlying data allows us to perform analysis that has not been previously done. In particular, we perform four types: First, we calculate expenses on the TDF and the funds they hold and compare them to what an investor would have to pay if s/he constructed the TDF directly. Second, it allows us to see exactly what TDFs are investing in at a point in time and over time. Third, it allows us to measure performance more accurately because we can account for changes in the betas on any TDF that occur because

    4 The difference between the 1,100 target date funds (with each share class counted as a fund) and the 229 in our final sample is due to our selecting one target date (2035 or 2030) from each fund family.

    of changes in the underlying funds held by the TDF. Fourth, by examining changes in holdings, we can explore the rationale for management's choices.

    2. Characteristics of Target Date Funds

    In this section we examine both the types of investments made by target date funds and the nature and size of the expenses associated with these funds.

    2.1 Holdings

    The typical TDF in our sample invests in 17 funds on average, with 68% holding 10 or more and 24% holding 25 or more funds. This understates the actual diversity in holdings because the funds with few holdings generally hold master trusts, which themselves hold multiple types of securities. Most target date funds are not the simple mixture of debt and equity envisioned in many papers. This can be seen clearly from Table 2. In addition to the normal domestic and international debt and equity funds, a high percentage of target date funds have added emerging markets debt and equity funds, domestic and international real estate funds, and commodity funds to their holdings. While one of these new investments, domestic real estate, was first held by TDFs in 2004, others appear later: commodities appear in 2006, and emerging market debt appears in 2007. Furthermore, a large percentage of target date funds that held these categories did not do so in their first year of existence. At the extreme, 81% of the funds that held international real estate added this category of investments one or more years after the fund started. The percentage of TDF funds holding any category increased over time. For 2011, the percentage of funds holding these new categories varied from a low of 22.2% for emerging market debt to a high of over 75% for emerging market equity. Recall that the majority of these funds have a target date of 2035 (or, in a few cases, 2030), so these target date funds are holding and increasing their investment in investment types that are thought of as inherently more risky as the target date approaches. In addition to the categories shown in Table 2, a number of funds made sector bets (19%) or country bets (8%) or held long-short funds (4%).

    What could account for these additional asset categories? One explanation is that some TDFs were trying to differentiate themselves from others. A second explanation is that these investment categories were identified as hot investment vehicles by the financial community in general. A third explanation is that they added these investments with the belief that they lowered risk through diversification.5 However, in the case of country and sector funds, this has to be a bet on a particular small subset of assets.

    The funds held by a TDF may be offered solely by the same family of funds as the TDF or include funds from another fund family. However, 63% of TDFs only hold funds offered by the fund family to which the TDF belongs. Most funds held outside the family are ETFs or index funds. Only 13.7% hold any active funds outside the family, and these are almost always specialized funds, such as commodity funds, not offered by the family. In every case, funds held outside the family represent a very small percent of any TDF's total investment.
    Each target date fund family chooses a glide path for each target date fund. The glide path specifies the percentage to invest in equity and debt over time. Across target date funds with the same target date offered by different fund families, the percentage invested in equity or debt has wide variation. The percent invested in debt and equity for TDFs with a target date of 2035 as of December 2011 is presented in Table 3. The lowest percent in equity held by any fund is 62%, whereas the highest is 89%. Most 2035 target date funds hold equity in the range of 70% to 85%. The amount invested in debt also varies, with the bulk of target date funds holding between

    5 Although TDFs may be adding additional asset categories in an attempt to lower risk, it does not seem to succeed. Sixty percent of the TDFs have higher risk than Vanguard, which only invests in stocks and bonds. Furthermore, in Section 4 we match the asset allocation of each target date fund with a portfolio only containing stocks and bonds, and in 75% of the cases, the TDF has a higher standard deviation.

    8% and 20% of their investments in the form of debt, with a low of 4% and a high of 27%. Thus, at a point in time, target date funds vary widely in their debt and equity percentages, even though they are managed to meet the needs of the same age group.6

    2.2 Expenses

    One of the key elements determining the performance of target date funds is the expenses incurred by the holders of these funds. Since the individual or institutional investor can often construct (mimic) a target date fund, a question remains: how much is the investor paying in total expenses by holding a target date fund rather than holding a matched set of mutual funds?

    The expenses on the target date funds consist of the expense ratio on the underlying mutual funds held by the target date funds and the expenses added on by the target date fund itself. This is somewhat complicated by the fact that the target date fund offers different classes of shares to different investors. While not all target date funds offer all classes, almost all offer more than one class. The overlay of TDF expenses differs across TDF share classes, but each share class of a TDF holds the same class of underlying funds in the same proportions and incurs the same expense ratio on the underlying funds. For example, the no-load class of a TDF will hold the same class of the underlying funds in the same proportions as the retirement class and incur the same expenses on the underlying funds. The mutual funds held by any one TDF will often be a combination of several share classes: for example, the no-load class for some of its holdings and the investor class for others.

    Table 4 presents the expense ratio for different classes of target date funds , as well as the breakdown of the expenses between the fees paid directly to the TDF and the fees paid to the

    6 Measuring heterogeneity of investment performance is the major thrus t of the paper by Balduzzi and Reuter (2013). Although this is not the major purpose of our paper, we find heterogeneity broadly consistent with their findings, even though the range we find for equity is smaller than that reported in the latter years of t heir sample.

    underlying funds.7 The highest total fees are charged by the C class TDF. The second highest expenses are those on A shares. For A shares, the total expense ratio is 114 bps, made up of 61 bps of fees on the underlying funds overlaid by 53 bps of fees for the target date fund. Note that the A class shares may have loads and that the loads are not included in the expense calculations. The size of the load is a function of the size of the purchase and is often waived for large purchases.
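    The fee structure described above is additive: the expense on the underlying funds plus the overlay charged at the TDF level. A minimal sketch using the A-share figures quoted in this paragraph:

```python
# Sketch of how the total expense ratio stacks for a TDF share class, using
# the A-share figures quoted above (in basis points): fees on the underlying
# funds plus the overlay fee charged at the target date fund level.
underlying_bps = 61   # weighted expense ratio of the underlying funds
overlay_bps = 53      # fee added at the TDF level
total_bps = underlying_bps + overlay_bps
print(total_bps)  # 114, the A-share total expense ratio quoted above
```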

    Examining Table 4, we see some differences in underlying fund expenses across TDF share classes, which arise because a different sample of funds offers each class. The major difference in total expenses comes from differences in the fees paid at the TDF level across share classes. The investor class and the no-load class have the lowest TDF expenses and the lowest total expenses.

    The expenses of the underlying funds are low because target date funds often hold low-cost mutual fund classes not available to any investor or only available to some investors. For example, 56% of the funds held by all TDFs are institutional class funds, 6.5% are retirement class funds, and 15.93% are master trusts.

    Table 5 shows, for investors who qualified for A class shares (or, alternatively, no-load class shares), how much the investor would pay to hold the underlying funds directly if he or she duplicated the TDF with A class shares or no-load class shares. 8 For example, from Table 5, an investor who only qualifies for A shares would have to duplicate the target date fund with A shares and would incur an expense ratio of 102 bps in doing so. 9 Thus, an investor who can only hold A shares is paying an additional fee of just 9.6 bps for the services provided by the TDF management. Likewise, for an investor who could buy no-load shares, the additional charge is only 4 bps. Viewed this way, the additional charges from holding a target date fund are small. Target date funds provide access to low-cost classes of mutual funds and charge a fee at the TDF level to capture the advantage to investors who would otherwise have to buy a more expensive class of the underlying fund. Since TDFs predominantly or exclusively hold funds within the same fund family, the split of expenses depicted in Table 5 between the TDF and the underlying funds is of little consequence to the target date fund sponsors.

    7 We report two entries for retirement funds: the average and the maximum. Many funds offer a number of classes of retirement funds, which differ in whether the fund or the retirement plan handles some of the administration of the plan. Since we cannot determine how these costs are split, we report the maximum, which for most funds means the administrative costs are borne by the mutual fund family.

    8 We limit this comparison to the A class and no-load class, since these are the classes for which we can find underlying funds of the same class for a meaningful number of TDFs.
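    The arithmetic behind this comparison can be sketched in a few lines of Python. The function and variable names are ours, and the figures simply restate the matched-sample numbers above (a direct replication cost of 102 bps and an implied TDF total of 111.6 bps, i.e., 102 + 9.6); this is an illustration, not the authors' code.

```python
def tdf_premium_bps(tdf_total_bps: float, direct_replication_bps: float) -> float:
    """Extra annual cost (in basis points) of holding the TDF versus
    replicating its underlying holdings directly in the cheapest share
    class the investor can access."""
    return tdf_total_bps - direct_replication_bps

# A-share investor (matched Table 5 sample): direct replication costs
# ~102 bps, so the TDF's effective overlay is only the small residual.
print(round(tdf_premium_bps(111.6, 102.0), 1))   # 9.6
```

    The same function applied to the no-load comparison yields the 4 bps figure quoted above.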

    One other aspect of expenses is worth examining. We generally expect a TDF's holdings to shift over time from funds that invest in more risky assets, which have higher expense ratios, to funds holding less risky assets, which have lower expense ratios. We examined what happens to expenses when retirement is planned for 2025, 2035, or 2045. The change in overall expenses is quite small, going from 1.119% for retirement in 2025 to 1.139% for retirement 10 years later and 1.174% for retirement 20 years later. The changes in total expenses and in underlying expenses are consistent with holdings becoming less risky as retirement comes closer and with bond funds having lower expense ratios than stock funds.

    3. Performance

    Managers of TDFs can improve performance by successfully timing deviations from the glide path, timing sector holdings, or selecting superior individual mutual funds. This section is divided into three parts. In the first part we examine how well the funds do in timing. In the second part we examine how well they have done in selecting assets. In the third part we examine, as an overall measure of performance, whether a simple strategy exists that outperforms TDFs. Note that in all three parts of this section we rely heavily on having composition data for the TDFs. While holdings data are important throughout this section, they are crucial in measuring fund selection ability. Other authors have simply run time-series regressions of a TDF's return on a set of indexes to obtain alphas. The earlier analyses made it clear that TDFs change the weights of different assets in their portfolios over time. By using holdings data, we can estimate betas that change over time, and we get a more accurate estimate of performance.

    9 The expenses reported in Table 5 differ from those in Table 4 because not all target date funds belong to families that offer A shares that matched the funds the TDF held. A shares have load fees. In this calculation we are assuming that the load on the underlying funds is the same as the TDF's or that the investor purchases enough that the load fee is waived.

    3.1 Timing

    Target date funds have a glide path that specifies the stock-bond split over time. However, target date funds often deviate from the stated glide path because of beliefs about future returns on stocks and bonds.

    All target date funds have a stated glide path, which involves increasing the amount invested in bonds and decreasing the amount invested in stocks over time. Therefore, the manager's bond-stock timing decision becomes how much to deviate from the glide path. For most funds, the glide path is presented pictorially in the prospectus, and examining the picture, it is impossible to accurately estimate the glide path numerically. Thus, we estimate the glide path from the data. The pictures show that the glide path is linear or close to linear over the relevant range of years included in our analysis, so we assume a linear glide path. For each target date fund, we calculate both the average proportion invested in stocks over all time periods and the average change in this proportion over each period in our sample history. To estimate the glide path for each fund, we use the average proportion invested in stocks as the midpoint of its history and the average change in stock investment to calculate the glide path on other dates. To measure bond-stock timing, we take the deviations from the glide path for stocks and bonds at the beginning of each quarter and multiply each by the return on that investment class over the quarter. In equation form, this is

    Timing_t = (w_s,t - g_s,t) R_s,t + (w_b,t - g_b,t) R_b,t,

    where w_s,t and w_b,t are the fund's beginning-of-quarter stock and bond weights, g_s,t and g_b,t are the corresponding glide-path weights, and R_s,t and R_b,t are the returns on the stock and bond indexes over quarter t.


    To measure timing rather than the return on the particular funds held, we use returns on indexes to calculate the returns from deviating from the glide path. For domestic stock, we use the Fama-French market index plus the riskless rate (since the Fama-French market index is a return above the riskless rate). For international stock, we use the MSCI World Index ex-US. For domestic bonds, we use the Barclays U.S. Aggregate Bond Index. Finally, for foreign bonds, we use the Bank of America Global Bond Index ex US. Since a glide path specifies the bond-stock split, and not how much is domestic or international, we need a single stock and a single bond index. In constructing the single stock index for any fund, we take the fund's beginning-of-quarter weights in domestic and international stocks, scaled to sum to 100%, and multiply each by its index return over the quarter. The bond return is calculated in a similar manner, with cash, which is assumed to earn the Treasury bill rate, treated as one of the bond components. Quarterly timing returns for each TDF are then cumulated to compute overall timing returns for the fund. Timing represents the difference between what the TDF would have earned by duplicating its bond-stock mix while investing in indexes and what it would have earned by following its glide path and investing in indexes. In estimating a TDF's bond-stock mix, we use only Morningstar's classifications divided into five categories. The average return due to timing across all funds is -11.52 bps per year, with a t-value of -1.8. If we pool all observations, which weights funds with a longer history more heavily, the result is -14.1 bps with a t of -2.76. Thus, target date funds do not improve, and may hurt, their performance by having their stock-bond mix deviate from the glide path. 10
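    The glide-path estimation and quarterly timing calculation described above can be sketched as follows. This is our illustrative reconstruction, not the authors' code; the function name and inputs are hypothetical, and cash is folded into the bond side as in the text.

```python
import numpy as np

def quarterly_timing_returns(stock_weights, stock_index_ret, bond_index_ret):
    """Timing return for each quarter from deviating from an estimated
    linear glide path. `stock_weights` holds the fund's beginning-of-quarter
    stock proportions; the index returns are the single blended stock and
    bond index returns described in the text."""
    w = np.asarray(stock_weights, dtype=float)
    t = np.arange(len(w))
    # Linear glide path: the average stock weight anchors the midpoint of
    # the fund's history, and the average per-quarter change sets the slope.
    slope = np.diff(w).mean()
    glide = w.mean() + slope * (t - t.mean())
    deviation = w - glide
    # A stock overweight implies an equal bond underweight, so the timing
    # return is the deviation times the stock-bond return spread.
    return deviation * (np.asarray(stock_index_ret) - np.asarray(bond_index_ret))

# A fund that tracks a linear glide path exactly earns zero timing return.
on_path = quarterly_timing_returns([0.60, 0.58, 0.56, 0.54],
                                   [0.02] * 4, [0.01] * 4)
print(np.allclose(on_path, 0.0))   # True
```

    Cumulating these quarterly values over a fund's history gives the overall timing return compared with the glide path.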

    3.2 Measuring fund selection

    The most common way to measure the asset selection ability of mutual funds is to compute the alpha from the historic time series of a fund's returns. Below, we describe a general model for doing so, explain why the standard way of estimating performance from such a model is not appropriate for target date funds, describe the method we use to estimate alpha, and present performance results. The standard model for estimating performance is a time-series regression of the type

    R_i,t - R_F,t = alpha_i + sum_j beta_i,j I_j,t + e_i,t,        (2)

    where R_i,t - R_F,t is the excess return on fund i in month t, I_j,t is the excess return on index j, beta_i,j is the fund's sensitivity to index j, and alpha_i measures performance.


    The use of time-series estimation is inappropriate for TDFs, since by design their betas are meant to change, and do change, over time because of allocation decisions across existing asset categories and the addition of new asset categories. With changing asset weights and changes in the asset classes included in a portfolio, the unconditional betas from a time-series regression would be completely misestimated and the computed alphas meaningless. To overcome this problem, we use the bottom-up approach of Elton, Gruber, and Blake (2011) to estimate betas and alphas for a TDF. Since a portfolio's alphas and betas are weighted averages of those of the assets that comprise it, we compute the monthly alpha on each fund the TDF holds and then use the proportions invested in each fund at the beginning of the month to compute the TDF's monthly alpha. The importance of estimating time-varying betas and their effect on alphas has been established by Ferson and Schadt (1996) and Christopherson, Ferson, and Glassman (1998). While our methodology differs from theirs, the motivation is similar. We can directly measure changing betas because we have monthly holdings data.

    10 We also examined timing with respect to domestic and international investment within both the stock and bond categories. Within the equity segment, switching between domestic and international investments added 8 bps per year, while within the bond segment timing cost 16 bps per year. Neither is close to statistically significant.

    More specifically, we start with three-year alphas computed every month, using data ending with the month for which we are computing alpha. We then calculate the one-month alpha for that month by taking the three-year alpha and adding back the residual for the month in question. These one-month alphas are then cumulated and averaged over the history of the TDF.
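    In code, the bottom-up aggregation for a single month reads roughly as below. This is a sketch of the calculation as we understand it from the description above; the function name and input arrays are hypothetical, not the authors' code.

```python
import numpy as np

def tdf_monthly_alpha(weights, trailing_alphas, residuals):
    """Bottom-up monthly alpha for a TDF. For each underlying fund, the
    month's alpha is its trailing three-year alpha plus that month's
    regression residual; the TDF's alpha is the weighted average using
    beginning-of-month holdings proportions."""
    w = np.asarray(weights, dtype=float)          # beginning-of-month proportions
    a = np.asarray(trailing_alphas, dtype=float)  # trailing 3-year monthly alphas
    e = np.asarray(residuals, dtype=float)        # this month's residuals
    return float(w @ (a + e))

# Two funds held 50/50 with monthly alphas of +10 and -10 bps (in decimal)
# and zero residuals net out to zero.
print(tdf_monthly_alpha([0.5, 0.5], [0.0010, -0.0010], [0.0, 0.0]))   # 0.0
```

    Cumulating and averaging these monthly values over the fund's history yields the TDF alphas reported below.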

    While we have discussed the methodology to estimate a general model like Equation (2), we have not clarified the indexes used on the right-hand side of Equation (2). Since target date funds hold many different types of mutual funds with different characteristics, we need to use indexes appropriate for the fund in question. For stock funds, we use the Fama-French three-index model plus momentum. For bond funds, following Blake, Elton, and Gruber (1993), we use a three-index model consisting of a general bond index, a mortgage-backed index, and a high-yield index, all in excess-return form. For foreign bond funds, foreign stock funds, domestic real estate funds, foreign real estate funds, sector funds, country funds, commodity funds, emerging market stock funds, and emerging market bond funds, we use market indexes of the appropriate market, all in excess-return form. In cases of low R² with the indexes employed, we examine the holdings and classify the fund consistent with its holdings. In a number of cases, the funds' holdings were not consistent with the Morningstar category. 11
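    A minimal version of the per-fund regression behind these alphas is sketched below, assuming a stock fund with the four-factor index set named above. The data arrays are hypothetical placeholders; this illustrates the estimation, not the authors' implementation.

```python
import numpy as np

def fund_alpha(excess_returns, index_excess_returns):
    """OLS time-series regression of a fund's excess returns on the index
    set appropriate to its type (e.g., the Fama-French three factors plus
    momentum for a stock fund). Returns the monthly alpha, the betas, and
    the residuals used in the bottom-up TDF alpha."""
    y = np.asarray(excess_returns, dtype=float)
    X = np.column_stack([np.ones(len(y)),
                         np.asarray(index_excess_returns, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return coef[0], coef[1:], residuals

# Sanity check on synthetic data: returns built as 0.001 + 0.5 * factor
# recover alpha = 0.001 and beta = 0.5 (up to rounding).
factor = np.array([0.01, -0.02, 0.03, 0.005, -0.015])
alpha, betas, _ = fund_alpha(0.001 + 0.5 * factor, factor.reshape(-1, 1))
print(round(alpha, 6), round(betas[0], 6))   # 0.001 0.5
```

    In practice the regression window is the trailing three years (or at least twelve months) of monthly data, as described in footnote 11.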

    The average alpha over the history across all target date funds is a negative 20 bps per year and is significantly different from zero at the 1% level. This is the alpha across all the TDFs' holdings and is after all expenses on the underlying funds but before the expenses added by the TDF.

    Most studies looking at alphas on the average mutual fund find that the average fund underperforms indexes by about 70 bps. Does this suggest that TDFs display superior selection ability with respect to the funds they hold? The answer is no. Examining the average expense ratios on the funds they hold (Table 4) shows expense ratios of about 60 bps, whereas most mutual fund studies examine share classes of funds with average expense ratios of 110 to 120 bps. TDFs have better alphas on the funds they hold primarily because they are able to hold share classes with low expense ratios. If one adds back the difference in expense ratios, the average alpha on the funds they hold is similar to the -70 bps normally found in mutual fund studies.

    As stated above, the average alpha on the underlying funds does not take into consideration the expenses added by the target date fund itself. Investors in a target date fund pay total expenses equal to the sum of the expenses imposed by the TDF and the expenses on the underlying funds. Table 6 shows the average alpha including total expenses for each target date share class, as well as the percentage of funds in each share class that have negative alphas. We see that each class has, on average, a negative alpha. The A class is the most commonly examined share class. The target date funds' class A shares have alphas consistent with the alphas of A shares of mutual funds in general: about minus 77 bps per year. The C class has more negative alphas of about 1.37% per year. No-load funds have negative alphas closer to zero (minus 27 bps per year), approximately the alphas of the lower-cost index funds found in the market. The alpha of the investor class is close to that of the no-load class (minus 33 bps per year). However, investor class shares are usually sold through investment advisors, who add their own fees to those charged by the target date fund.

    11 The three years used in estimating alpha were the three years ending in the month in question. If three years of data did not exist, we used the longest time frame we had, provided it was at least twelve months. If adequate data did not exist, the fund held by the TDF was excluded and the weights were rescaled.

    The alpha on retirement accounts is slightly higher than the alpha on A shares. There is a range of alphas on retirement accounts that we believe is a function of whether the fund company, financial advisors, or the retirement account itself handles some of the administrative costs. Many fund families offer several subclasses of retirement shares. For example, John Hancock has R1, R2, R3, R4, R5, and R6 shares with slightly different expense ratios. When we compare the expense ratio of the highest-expense retirement share for each family (where the fund bears the administrative costs) with the average retirement share class, the returns go down, but only by 7 bps per year. There is little difference in expense ratios across different retirement share classes.

    4. An Overall Performance Measure

    While there are several possible measures of performance, a very practical one is to ask whether an investor who followed a simple strategy would obtain better results than by holding target date funds. We examine the mean return, standard deviation, and Sharpe ratio for the simple strategy and compare them to the values for TDFs.

    The simple strategy assumes that an investor observes the first reported asset allocation that occurred at least three months after the start of each TDF 12 and holds the same proportions over the life of the fund in five categories of indexes: domestic stock, international stock, domestic bonds, international bonds, and cash. For cash, we use the one-month Treasury bill rate, and for the other four categories, we use the indexes discussed earlier.

    Across all share classes and all funds, we find that in 75% of the cases the standard deviation of the returns from the naïve strategy is less than that of the TDF. Thus, the additional types of investment used by the TDF (e.g., real estate and commodities) do not seem to reduce risk. On average, the variance of the naïve strategy was 12% less than the variance of the TDF. In addition, the mean return on the naïve strategy was also higher in 75% of the cases, with the difference averaging 62 bps per year.

    For each share class of TDFs, the Sharpe ratio is greater for the naïve strategy in the preponderance of cases, and the difference is statistically significant at the 0.01 level for most classes. 13 Thus, an investor would be better off using a buy-and-hold strategy that invests only in passive portfolios and uses only domestic and international stocks and bonds and cash. 14
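    The three comparison statistics used above can be sketched as follows. The function is ours and the illustrative return series are hypothetical, not the paper's data.

```python
import numpy as np

def compare_strategies(tdf_returns, naive_returns, riskfree):
    """Mean difference, variance ratio, and Sharpe-ratio difference
    between the fixed-weight index strategy and the TDF."""
    tdf = np.asarray(tdf_returns, dtype=float)
    naive = np.asarray(naive_returns, dtype=float)
    rf = np.asarray(riskfree, dtype=float)

    def sharpe(r):
        excess = r - rf
        return excess.mean() / excess.std(ddof=1)

    return {
        "mean_diff": naive.mean() - tdf.mean(),                 # > 0: naive earned more
        "variance_ratio": naive.var(ddof=1) / tdf.var(ddof=1),  # < 1: naive less risky
        "sharpe_diff": sharpe(naive) - sharpe(tdf),             # > 0: naive dominated
    }

# If the naive strategy beats the TDF by a constant each period, the
# variance ratio is 1 and both the mean and Sharpe differences are positive.
tdf = [0.010, 0.020, 0.015, 0.005]
stats = compare_strategies(tdf, [r + 0.001 for r in tdf], [0.0] * 4)
print(round(stats["variance_ratio"], 9),
      stats["mean_diff"] > 0, stats["sharpe_diff"] > 0)   # 1.0 True True
```

    In the paper the comparison is run per share class on quarterly data; this sketch shows only the statistics, not the sampling.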

    5. Shareholder Objectives or Family Objectives

    A number of articles have found that mutual fund managers make investment decisions that hurt individual mutual fund performance but help fund family objectives. Cohen and Schmidt (2009) find that fund managers overweight a firm when the fund family is a trustee of that firm's 401(k) plan and increase their holdings in these firms when other mutual funds are decreasing their holdings. Davis and Kim (2007) show that mutual funds are not acting in their shareholders' best interest in the votes they cast when doing substantial pension fund business with the firm. Sandhya (2011) shows that TDFs with the greatest potential for conflicts of interest have the poorest performance and concludes that they are adding higher-cost funds or poorly performing funds with large outflows. Bhattacharya, Lee, and Pool (2013) show that funds of funds increase their investment in individual funds in their fund family when the individual funds have large outflows. Gaspar, Massa, and Matos (2006) and Casavecchia and Tiwari (2011) show how intrafamily trading benefits fund families at a cost to individual funds. Evans (2010) shows how fund families pursue their own objectives in setting fees and increasing fund offerings.

    12 We took our starting point three months after the TDF was started because the first observation often contains an allocation before the fund is fully invested, for example, a very large cash position. In addition, we assumed that investment in the "other" category was allocated proportionally over the five categories named above. The "other" category was generally well below 5% of assets, except in three cases not included in our sample.

    13 This result holds even when we subtract 15 bps, representing fees on low-cost index funds.
    14 We also compared the performance of TDFs before fees imposed by the TDFs with the performance obtained by an investor investing in the actual funds held by the TDFs. We followed the same procedure described above. The Sharpe ratios of the replicating portfolio are higher than the TDF Sharpe ratios in 67% of the cases, and the average difference is highly significant.

    What makes our analysis unique is that, because we have data on the monthly holdings of individual TDFs, we can analyze hypotheses concerning management behavior by examining managers' actions directly rather than inferring those actions from overall return results.

    TDFs are particularly appropriate for studying potential agency problems between individual funds and fund families because they primarily hold mutual funds of the fund family that sponsors the TDF. When a TDF invests in a fund outside the family, it almost always does so because a similar fund does not exist within the family. Across our sample, 69.9% of the funds that were added by a TDF had at least one alternative fund in the family with the same Morningstar classification. We refer to these funds as the alternatives. The average number of alternatives was 3.8, and the percentage of times there was more than one was 68.1%.

    In this section we examine four variables that might satisfy fund family objectives but are not necessarily objectives of the target date fund shareholders: 15

    1. start date: the family might want to help start-up funds;
    2. management fee: higher-fee funds bring in more money to the family;
    3. total net assets: management might select funds that are smaller than the alternatives to help these funds reach a scale at which they are profitable; and
    4. cash flow: the TDF might select funds that were losing assets or growing at a rate slower than alternatives to help a fund reach a size that is profitable.

    It would be in the interest of fund families to have TDFs invest in recently started funds to help start-ups boost their asset size and obtain economies of scale. We find that many target date funds do add an abnormal number of funds that have been in existence for a short period of time. There were 720 cases in which a TDF adding a fund had the option of selecting an alternative in the same fund family with the same Morningstar objective. In 15% of these cases, there was an opportunity to invest in a fund that had existed for three months or less. When TDFs had this opportunity, 72% of the time they selected the fund that had existed for less than three months, whereas random selection would have picked such a fund 34% of the time. In 30% of the 720 cases, there was an opportunity to invest in a fund that had existed for one year or less. When a TDF had this opportunity, 57% of the time it selected the fund that had existed for less than one year, whereas random selection would have picked such a fund 34% of the time. How well have the start-up funds done relative to the alternative funds available in the same family? The funds added that had been in existence for three months or less had alphas over the next three years that were lower by 86 bps per year than the average three-year alphas on the alternative funds. This difference is statistically significantly different from zero (t = 2.14). Management is clearly adding a disproportionate percentage of new funds whose three-year performance after addition is inferior to that of the alternatives they could have added.

    15 In all analyses in this section, we examine only funds added by management that are in the TDF's fund family.

    The next variable we examine is management fees. If a specific manager is concerned with family objectives rather than investor objectives, she would add funds with higher management fees than the alternative funds of the same type offered by the fund family. In fact, the average manager does not do so. 16 However, specific managers do. When a TDF manager chooses funds with much higher management fees than the alternatives, we find that the funds selected had much lower future alphas than the alternatives. For example, there were thirteen funds for which the manager selected a fund whose fees were 40 bps or more higher than the alternative funds'. For the following three years, these funds had alphas 256 bps per year lower than the alternative funds, with a t of 2.4. The thirty-three additions for which the manager selected a fund whose fees were higher than the alternative funds' by 30 bps or more underperformed the alternatives by 115 bps per year, with a t of 1.43. TDF managers on average do not seem to be adding funds with higher management fees than alternative funds, but when a manager adds funds with much higher management fees than those on alternatives in the same fund family, the funds added have much lower performance than the alternatives. Some managers seem to be maximizing family objectives rather than shareholder objectives.

    16 This differs from Sandhya's (2011) conjecture, which is based on the difference in performance between TDFs that invested in funds within their families and TDFs that invested outside their families.

    If some fund managers were selecting funds in part because of family concerns rather than shareholder concerns, we would expect them to select more small funds than could be justified by future alpha. Since start-up funds are generally small, and since we analyzed start-up funds earlier in this section, we eliminated all start-up funds in the first six months of their existence. 17 We then ranked all funds by size. When TDF management selected funds with less than 60 million dollars under management (26 funds), those funds earned, over the next three years, an alpha 240 bps per year less than the alternatives (t = 2.09). We chose 60 million because the belief in the investment community is that this is the minimum size at which a fund is profitable. We also examined three other breakpoints: 100 million or less resulted in underperformance of 209 bps per year (t = 2.73), 150 million or less in 144 bps per year (t = 2.04), and 200 million or less in 116 bps per year (t = 2.07). Selecting funds of small size is desirable from a family point of view, but it has hurt TDF performance.

    The last variable we examine is growth. If a fund had a large outflow, it would help the family if the TDF invested in it. We find no evidence of TDFs selecting funds with large outflows. 18

    We find that, in pursuing a number of characteristics that serve fund family objectives, TDFs add funds with poor subsequent alphas relative to alternatives in the same family. These characteristics are new funds, funds with high management fees, and small funds. These findings support the previous literature showing that funds are managed in part to support fund family objectives rather than the objectives of the individual funds.

    17 Including these funds only strengthens the results reported below.

    18 This differs from Sandhya (2011) and Bhattacharya, Lee, and Pool (2013), although the sample in the latter article includes many types of funds of funds rather than just TDFs.


    6. Conclusion

    Target date funds (TDFs) have become an important vehicle for retirement plans: 72% of 401(k) plans offer TDFs, and over 43% of young employees' new 401(k) money is invested in them. Despite their importance for the financial health of future retirees, very little is known about their characteristics and performance. In this paper we address this lack of knowledge.

    Target date funds are usually thought of as holding a mix of debt and equity while following a predetermined glide path, with the equity proportion declining over time. The reality is more complex. Currently, many target date funds hold commodity funds, domestic and international real estate funds, and funds holding the debt or equity of emerging markets. In addition, TDFs take active bets, deviating from their stock-bond glide path. We show that this active timing does not add value.

    Target date funds are funds of funds. As such, they add an additional fee to the fees charged by the mutual funds they hold. This additional fee can be quite high, averaging 53 bps for A class shares. We show that this added fee is mostly offset by TDFs investing in low-expense classes of mutual funds not available to most investors. We find that the total fee an investor pays is not much higher than the investor would pay to replicate the TDF portfolio by directly purchasing the share class available to that investor.

    On average, the performance of the funds selected by target date funds is better than that normally found in mutual fund studies. This difference is due to the lower fees on the classes of shares TDFs hold. When the added fees of target date funds are taken into account, the performance of target date funds is similar to that normally found in mutual fund studies.

    Target date funds almost always hold funds of the fund family to which they belong. Normally, a TDF selects a fund outside the fund family only when the family does not offer a similar fund (e.g., commodity, international real estate). In the majority of cases when the TDF adds a fund, it has alternatives with the same objective within the fund family. We show that some TDFs add funds that satisfy family objectives but hurt shareholder performance. This is manifested by TDFs selecting new start-ups, by some managers selecting funds with much higher management fees than alternatives, and by TDFs selecting small funds. All of these additions have lower alphas than the alternatives in the fund family.








    References

    Agnew, J., P. Balduzzi, and A. Sunden. 2003. What do we do with our pension money? Recent evidence from 401(k) plans. American Economic Review 93:193–205.

    Ameriks, J., and S. Zeldes. 2004. How do household portfolio shares vary with age? Working Paper, Columbia University.

    Balduzzi, P., and J. Reuter. 2013. Heterogeneity in target date funds. Unpublished manuscript, Boston College.

    Barron's. 2014. Target date funds take over. July 5.

    Benartzi, S., and R. Thaler. 2001. Naïve diversification strategies in retirement saving plans. American Economic Review 91:78–98.

    Bhattacharya, U., J. Lee, and V. Pool. 2013. Conflicting family values in mutual fund families. Journal of Finance 68:173–200.

    Blake, C., E. Elton, and M. Gruber. 1993. The performance of bond mutual funds. Journal of Business 66:371–403.

    Bodie, Z., R. Merton, and W. Samuelson. 1991. Labor supply flexibility and portfolio choice in a life-cycle model. Journal of Economic Dynamics and Control 16:427–49.

    Campbell, J., J. Cocco, F. Gomes, and P. Maenhout. 2001. Investing retirement wealth: A life-cycle model. In Risk aspects of investment-based social security reform. Eds. J. Y. Campbell and M. Feldstein. Chicago: University of Chicago Press.

    Campbell, J., and L. Viceira. 2002. Strategic asset allocation: Portfolio choices for long-term investors. New York: Oxford University Press.

    Casavecchia, L., and A. Tiwari. 2011. Cross-trading and the cost of conflict of interest of mutual fund advisors. Working Paper, University of Iowa.

    Christopherson, J., W. Ferson, and D. Glassman. 1998. Conditional manager alphas on economic information: Another look at the persistence of performance. Review of Financial Studies 11:111–42.

    Cocco, J., F. Gomes, and P. Maenhout. 2005. Consumption and portfolio choice over the life cycle. Review of Financial Studies 18:491–533.

    Cohen, L., and B. Schmidt. 2009. Attracting flows by attracting big clients. Journal of Finance 64:2125–51.

    Davis, G., and H. Kim. 2007. Business ties and proxy voting by mutual funds. Journal of Financial Economics 85:552–70.

    Elton, E., M. Gruber, and C. Blake. 2007. Participant reaction and the performance of funds offered by 401(k) plans. Journal of Financial Intermediation 16:240–71.

    —. 2011. Holdings data, security returns, and the selection of superior mutual funds. Journal of Financial and Quantitative Analysis 46:341–67.

    Evans, R. 2010. Mutual fund incubation. Journal of Finance 65:1581–611.

    Ferson, W., and R. Schadt. 1996. Measuring fund strategy and performance in changing economic conditions. Journal of Finance 51:425–61.

    Gaspar, J., M. Massa, and P. Matos. 2006. Favoritism in mutual fund families? Evidence on strategic cross-fund subsidization. Journal of Finance 61:73–104.

    Huberman, G., and P. Sengmuller. 2004. Company stock in 401(k) plans. Review of Finance 8:403–43.

    Liang, N., and S. Weisbenner. 2006. Investor behavior and the purchase of company stock in 401(k) plan design. Journal of Public Economics 90:1315–46.

    Madrian, B., and D. Shea. 2001. The power of suggestion: Inertia in 401(k) participation and savings behavior. Quarterly Journal of Economics 116:1149–87.

    Merton, R. 1969. Lifetime portfolio selection under uncertainty: The continuous-time case. Review of Economics and Statistics 51:247–57.

    Poterba, J., J. Rauh, S. Venti, and D. Wise. 2005. Utility evaluation of risk in retirement savings accounts. In Analyses in the economics of aging. Ed. David Wise. Chicago: University of Chicago Press.

    —. 2009. Lifecycle asset allocation strategies and the distribution of 401(k) retirement wealth. In Developments in the economics of aging, 333–79. Ed. David Wise. Chicago: University of Chicago Press.

    Samuelson, P. 1963. Risk and uncertainty: A fallacy of large numbers. Scientia 98:108–13.

    Sandhya, V. 2011. Agency problems in target date funds. Working Paper, Georgia State University.

    Shiller, R. 2005. Lifecycle portfolios as government policy. Economists' Voice 2:1–9.

    VanDerhei, J., S. Holden, L. Alonso, and S. Bass. 2012. 401(k) plan asset allocation, account balances, and loan activity in 2011. Employee Benefit Research Institute, No. 380.

    —. 2013. 401(k) plan asset allocation, account balances, and loan activity in 2012. Employee Benefit Research Institute, No. 394.

  8. Tarun Chordia Slides

    Cross-Sectional Asset Pricing with Individual Stocks: Betas versus Characteristics

    Tarun Chordia, Amit Goyal, and Jay Shanken

    Main question

    -Are expected returns related to
       – Risk/betas, OR
       – Characteristics

    -If both, which is more important?

    How to answer?

    -Use portfolios
      -Helps mitigate EIV problem
        –Fama and MacBeth (1973)
        –Less efficient: Ang, Liu, and Schwarz (2010)
        –Method of grouping is important: Lewellen, Nagel, and Shanken (2010)

    -Use individual securities
      -But, EIV problem

    What we do

    -Use individual securities

    -Correct for EIV bias
      -Litzenberger and Ramaswamy (1979), Shanken (1992), Kim (1995)

    -Allow betas to change over time
      -Two year rolling regressions

    -Quantify contribution of betas vs characteristics in explaining the cross-section of returns
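    The two-pass procedure the slides describe (rolling first-pass betas, then month-by-month cross-sectional regressions) can be sketched on simulated data. This is a minimal Fama-MacBeth illustration, not the authors' EIV-corrected estimator; all data and parameter choices below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 50                       # months and stocks (toy sizes)
factor = rng.normal(0.005, 0.04, T)  # market excess return
true_beta = rng.uniform(0.5, 1.5, N)
returns = 0.002 + np.outer(factor, true_beta) + rng.normal(0, 0.08, (T, N))

window = 24  # two-year rolling window, as in the slides
gammas = []
for t in range(window, T):
    # First pass: each stock's beta from the trailing window.
    X = np.column_stack([np.ones(window), factor[t - window:t]])
    betas = np.linalg.lstsq(X, returns[t - window:t], rcond=None)[0][1]
    # Second pass: cross-sectional regression of month-t returns on betas.
    Z = np.column_stack([np.ones(N), betas])
    gammas.append(np.linalg.lstsq(Z, returns[t], rcond=None)[0])

gammas = np.array(gammas)
premium = gammas[:, 1].mean()  # time-series average slope: the beta premium
print(f"estimated beta premium per month: {premium:.4f}")
```

    The estimation error in the first-pass betas is what creates the EIV bias the slides refer to; the corrections of Litzenberger and Ramaswamy (1979), Shanken (1992), and Kim (1995) adjust the second-pass estimates for it.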





    Intuition …


    Relative importance





    -All common stocks on NYSE, AMEX, and NASDAQ
    -Sample: July 1963 to December 2013
    -Price greater than $1 (for CSR)
    -Sample of all stocks and non-microcap stocks (above the 20th NYSE size percentile)
      -Monthly average of over 3000 stocks with about 1500 non-microcap stocks
    -Fama-French (2014) five factor model plus Momentum
      -Mkt, SMB, HML, RMW, CMA
    -Characteristics include size, B/M, past six month return (exclude last month), operating profitability and investment
      -Assumed to be available 6 months after fiscal year-end
      -Winsorized at the 99th and 1st percentiles
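    The winsorization step on the characteristics can be sketched as follows; `bm` is a hypothetical book-to-market cross-section, not the authors' data.

```python
import numpy as np

def winsorize(x, lower=1, upper=99):
    """Clamp values outside the given percentiles (here the 1st and 99th)."""
    lo, hi = np.percentile(x, [lower, upper])
    return np.clip(x, lo, hi)

# Hypothetical book-to-market cross-section; any characteristic works the same way.
rng = np.random.default_rng(1)
bm = rng.lognormal(sigma=1.0, size=3000)
bm_w = winsorize(bm)
print(bm.max(), bm_w.max())  # the winsorized max is pulled in to the 99th percentile
```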







    Time variation in risk premia





    -Reject all factor models
      -Rejection not news

    -Characteristics more important than betas

    -Risk premiums
      -Negative on SMB
      -Positive on RMW and CMA
      -No premium on HML or MOM
      -Less robust positive premium on Mkt

  9. Bryan Kelly Paper

    Excess Volatility: Beyond Discount Rates*

    Stefano Giglio and Bryan Kelly

    University of Chicago and NBER

    March 4, 2015

    We document a form of excess volatility that is irreconcilable with standard models of prices, and in particular cannot be explained by variation in the discount rates of rational agents. We compare the behavior of prices of claims on the same stream of cash flows but with different maturities. Prices of long-maturity claims are dramatically more variable than justified by the behavior of short-maturity claims. Our analysis suggests that investors pervasively violate the "law of iterated values." The violations that we document are highly significant both statistically and economically, and are evident in all asset classes we study, including equity options, credit default swaps, volatility swaps, interest rate swaps, inflation swaps, and dividend futures.

    *We are grateful to Drew Creal, Lloyd Han, Lars Hansen, and Stavros Panageas for many insightful comments.

    1 Introduction

    The field of modern financial economics is in large part organized around the notion of excess volatility in asset prices. As Shiller (1981) and others famously document, price fluctuations are "excessive" relative to predictions from the constant discount rate model. A potential resolution of the puzzle is to recognize that discount rates are variable. The leading frameworks of modern finance indeed center on descriptions of discount rate variation in models of rational expectations.

    In this paper we document a form of excess volatility that is irreconcilable with standard models of prices, and that in particular cannot be explained by variation in the discount rates of rational agents. We compare the behavior of prices of claims on the same stream of cash flows but with different maturities. Our analysis suggests that investors pervasively violate the "law of iterated values" (Anderson, Hansen, and Sargent (2003)). That is, the law of iterated expectations dictates that prices of long maturity claims reflect investors' expectations about the future value of short maturity claims. This imposes consistency requirements on the joint behavior of prices across the term structure.

    We document an internal inconsistency in the price behavior of short and long maturity claims. In particular, prices at the long end of the curve are dramatically more variable than justified by the behavior of the short end. These violations are highly significant both statistically and economically. Excess volatility of long maturity prices is evident in all asset classes we study, including claims to equity volatility, sovereign and corporate credit default risk, interest rates, inflation, and corporate dividends.

    A simple example illustrates the essence of our approach. Consider an asset that yields an uncertain cash coupon of x_t each period. At time t, an n-maturity claim receives the cash flow x_{t+n} in period t + n. In the absence of arbitrage, the prices of claims across all maturities are coupled by the dynamics of x_t. No-arbitrage implies the existence of a pricing measure, Q, that subsumes all risk premia, and their potentially time-varying dynamics, by construction. Under Q, prices are expectations of future cash flows:1

        p_{nt} = E_t^Q[x_{t+n}].

    The convenience of representing prices as Q-expectations is that any excess volatility that we find in prices cannot be due to a time-varying risk premium, by definition. If we can recover the dynamics of x_t under Q from one part of the term structure, we can then ask whether prices elsewhere on the curve are consistent with those dynamics.

    1 For exposition we assume here that the risk-free rate r_t is 0 or, alternatively, that all cash flows in the contract are exchanged at maturity, as in the case of swaps. Later sections address the role of time-varying interest rates in detail.


    Importantly, a failure of the Q-dynamics extracted from the short end of the curve to explain prices at the long end of the curve can be directly linked to a violation of the law of iterated values. Under Q, iterated expectations bind together the prices of claims across maturities. For example, the price of the n-period claim is coupled with the price of the 1-period claim:

        p_{nt} = E_t^Q[p_{1,t+n-1}].

    A violation of this equation (for example, through inconsistent variances of the left-hand and right-hand sides) constitutes a failure of the law of iterated values.
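    The consistency requirement can be illustrated numerically in a one-factor special case of the paper's setup, assuming a zero-mean AR(1) cash flow under Q with an arbitrary persistence parameter: with p_{nt} = E_t^Q[x_{t+n}] = ρ^n x_t, the variance of the n-period price is pinned down by the variance of the 1-period price.

```python
import numpy as np

# One-factor special case: x_t is a zero-mean AR(1) under Q with persistence rho,
# so the n-maturity price is p_{nt} = E_t^Q[x_{t+n}] = rho**n * x_t.
rho, n = 0.9, 5
rng = np.random.default_rng(2)

T = 10_000
x = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

p1 = rho * x       # 1-period price
pn = rho**n * x    # n-period price

# Law of iterated values: p_{nt} = E_t^Q[p_{1,t+n-1}] forces
# Var(p_n) / Var(p_1) = rho**(2 * (n - 1)).
ratio = pn.var() / p1.var()
print(ratio, rho**(2 * (n - 1)))
```

    A long-maturity price series whose variance exceeds this bound, relative to the short end, is exactly the kind of violation the paper tests for.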

    We develop a general methodology for measuring and testing excess volatility that requires minimal modeling assumptions and exploits the information contained in the term structure of cash flow claims on any asset. Our methodology extends the preceding example to any setting in which the cash-flow variable x_t follows a factor structure with linear dynamics under Q and an arbitrary number of factors. This assumption is valid in standard term structure models, where it is typically derived from a linear factor structure under the physical measure together with an affine stochastic discount factor. It also describes many general-equilibrium asset pricing models, such as the long-run risks model of Bansal and Yaron (2004), the rare disaster model of Wachter (2013), and the cash flow duration model of Lettau and Wachter (2006). Furthermore, our setting allows for time-varying risk prices and stochastic volatility.

    Our tests of excess volatility are remarkable for what they do not require. We require no data other than asset prices, obviating the need for data on the underlying cash flows x_t. Nor do we require a model of discount rates. By definition, the behavior of x_t under the pricing measure implicitly captures all relevant discount rate variation. In our approach, any surplus (or deficit) of variance in p_{nt} relative to its predicted value already accounts for discount rate effects. Finally, we require a minimal amount of time series information (used to compute the covariances of prices at different maturities). It is well known from the term structure literature that the dynamics under Q are precisely estimated with short time series via cross-sectional regressions of prices along the term structure. We leverage this fact and focus exclusively on Q-dynamics, avoiding the difficulties of estimating physical dynamics and stochastic discount factors.

    We consider a number of potential explanations for the excess volatility of long maturity claims, such as liquidity and omitted factors. The evidence is inconsistent with both of these explanations. For most asset classes, we have detailed liquidity information across the term structure; we study only those maturities that transact regularly each day, and we demonstrate that observed price volatility is unassociated with bid-ask bounce or stale prices. Nor are omitted factors likely to explain our findings. First, parsimonious linear factor structures provide an extremely accurate description of every term structure that we study, with R2 values generally exceeding 98% based on one to three factors. Second, we project long maturity prices onto the short end of the curve, and show that even the variance of the projected prices (which are 100% explained by short prices) remains highly significantly excessive. Finally, we perform a range of robustness tests allowing for richer factor structures, and our findings remain essentially unchanged.

    Our paper is related to Stein (1989) in terms of economic intuition and implementation; he studies the pricing of S&P 100 volatility using the term structure of implied volatility of options. Stein compares ρ^Q with the persistence of volatility under the physical measure, ρ. He finds that ρ^Q > ρ and interprets this as evidence of overreaction. We build on Stein (1989) in three ways. First and foremost, we do not compare the physical and risk-neutral dynamics of the underlying cash flow process (ρ and ρ^Q). This is a conceptually crucial difference. As the term structure literature (developed largely subsequent to Stein's analysis) has pointed out, ρ^Q will generally differ from ρ in the presence of time-varying discount rates, even in a rational model. In fact, in standard affine models ρ^Q - ρ exactly measures the time-varying component of risk premia. In other words, time-varying discount rates provide a natural potential explanation for Stein's facts, in the same way that discount rate variation helps resolve Shiller's original excess volatility puzzle.

    In contrast, we use only information in the cross-section of prices, focusing solely on dynamics under the pricing measure. In essence, we compare the estimate of ρ^Q from the short end of the curve to the ρ^Q implied by the long end. Because our analysis is conducted entirely under the pricing measure, time variation in risk premia cannot play a role in it. Any overreaction (excess volatility) that we find is not mechanically resolved by a time-varying discount rate explanation. The second contribution of our paper is to show that this excess volatility phenomenon is not merely a feature of the options market: it holds across diverse asset classes and across countries. Finally, we propose a general methodology that allows for an arbitrary factor structure (as opposed to the single-factor model in Stein) and derive a test statistic for excess volatility.

    In Section 2 we present our general modeling framework, of which the preceding one-factor example is a special case. We present our approach to estimation and inference in Section 3. Section 4 documents our central empirical facts on excess volatility across many asset classes. Section 5 discusses a number of extensions, and Section 6 concludes.

    2 Affine Term Structures and Pricing Under Q

    In this section we specify the dynamic structure of the economy under a probability measure denoted Q, the pricing measure.2 The state space under Q is described by a vector autoregressive process for the factors, F_t:

        F_{t+1} = c + ρ^Q F_t + ε_{t+1},

    where ε_{t+1} has mean zero under Q. The intercept c is an inconsequential constant function of the remaining model parameters that drops out of all variance calculations. Note also that no risk-free rate adjustment appears in equation (4), a point we discuss below. All assumptions are discussed briefly in the next section and in detail, asset class by asset class, in the appendix.

    How variable are prices in this model? What drives the comovement across claims of different maturity? Price fluctuations for all claims to x_t are entirely driven by fluctuations in the state factors, but they differ with maturity depending on the state persistence matrix,

    2 The pricing measure is a transformation of the objective statistical measure that scales physical probabilities by investors' marginal utilities state-by-state. This carries the implication that asset prices are martingales under Q, a feature that we exploit in our development. Such a measure is guaranteed to exist under the minimal assumption of no-arbitrage (Harrison and Kreps (1979), Harrison and Pliska (1981)).

    ρ^Q. Consider the stationary case in which all eigenvalues of ρ^Q are strictly less than one in modulus. At the short end, the claim receives a single cash flow and has sensitivity to state fluctuations given by δ′_1(I + ρ^Q). As the maturity rises to n, the claim receives a total of n cash flows. Because x_t is persistent, this claim is more sensitive to the state, represented by the additional ρ^Q term in (12). The increasing powers of ρ^Q also indicate that more distant cash flows matter less for price fluctuations today. Thus, price volatility is an increasing but concave function of maturity. Moreover, the model structure places restrictions on the exact price variances that are admissible. If there are K factors but n > K maturities, then the variances of any n - K prices in the term structure are entirely pinned down by the other K price variances, through ρ^Q.

    2.1 Discussion of the main assumptions

    Given that the results of the paper are derived under the assumptions about the term structure described above, it is important to underline which are the key assumptions in that setup, and which assumptions play instead a minor role.
    The two fundamental assumptions of the paper are:
    1. Cash flows xt follow a factor structure, and
    2. Factors obey linear dynamics.

    The first assumption of the model can be easily verified in the data. The term structure model we described above has an extraordinarily high degree of explanatory power for asset prices in a wide variety of asset classes. For traded claims with a term structure of maturities, we typically find that a small number of latent factors (typically one to three, many fewer than the number of traded maturities) explains close to 100% of the price variation throughout the term structure. The most well known example is the US treasury bond term structure, though we find the same feature among derivatives in credit, equity, currency, and other markets. This strongly supports the assumption of a factor structure.

    The second main assumption of the paper is that state dynamics are linear under Q. The vast majority of models in the asset pricing literature assume linear dynamics for the underlying state variables, both in structural general equilibrium asset pricing models and in the reduced-form term structure literature. In addition, the Wold decomposition applies in our setting as long as the factors are stationary under Q (stationarity is typically assumed in term structure models and is supported by our empirical evidence). This implies that state dynamics can be represented as a vector moving average process (with possibly infinite lags), for which our flexible finite-order VAR can be viewed as an approximation. Finally, Le, Singleton, and Dai (2010) discuss conditions for a model to possess a linear affine structure under Q even when objective dynamics are non-affine.

    The specification above also imposes some other, non-critical assumptions that we briefly discuss here and explore in more detail in the Appendix.

    Exponentially affine form. We assumed above that the term structures we consider have prices that depend exponentially on the cash flows x_t. This is a natural assumption for some asset classes, such as bonds, in which prices are exponentially affine functions of x_t = -r_t. In other term structures, payoffs are linear in the cash flow variable, so prices are expectations of the cash flow in levels rather than in logs:

        p_{nt} = E_t^Q[x_{t+n}].

    3 Estimation and Testing

    In this section we develop a methodology for testing excess volatility across the term structure given the model specification of Section 2. We start by discussing parameter identification for factor dynamics under Q. We then show how to infer dynamics under Q from the comovement of prices at the short end of the term structure. Finally, we propose a test for "excess volatility" of prices at the long end of the term structure.




    Note that each element i of ρ^Q needs to satisfy this equation: ρ^Q can therefore be computed by finding the roots of this polynomial equation. This structure has the convenient feature that we can estimate the state dynamics from the yields without any maximization (as is typical in term structure models).

    One consideration is that there will generally be n_H + 1 roots of this polynomial (some of them complex), while we only seek K parameters. This equation shows that the Q-measure dynamics and the comovements of prices only identify the eigenvalues of ρ^Q up to the set of roots of this polynomial. It does not tell us which roots to choose, as they imply the same covariance among prices (while a full MLE procedure that exploits information about both the P and the Q dynamics would be able to choose among them). We use the following selection procedure. First, we only consider non-explosive roots. This is motivated by the unambiguous empirical fact that price variances are concave in maturity for all the markets we study, especially at the short end of the curve, where our estimation comes from. If price variances rise less than linearly with horizon, the system is best described by stationary dynamics. Second, among the non-explosive roots, we select the K most persistent ones. This ensures that our excess volatility findings will be the most conservative (they will suggest the least excess volatility) of all of the covariance-equivalent roots we could have reported. Finally, we choose real roots whenever possible, since complex roots imply regression coefficients of prices on the factors p_t at maturities above n_H + 1 that display cycles across maturities, an implication that is strongly counterfactual.
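    The selection rules can be sketched as follows. This is a simplified illustration: the polynomial here is a toy example rather than one estimated from prices, and we order real roots ahead of complex ones before picking the most persistent, a slight simplification of the procedure described above.

```python
import numpy as np

def select_eigenvalues(poly_coeffs, K):
    """Pick K candidate eigenvalues of rho^Q from the roots of the identifying
    polynomial (the polynomial itself would come from the cross-sectional
    price regressions; here it is supplied directly)."""
    roots = np.roots(poly_coeffs)
    # 1. Non-explosive roots only: modulus strictly below one.
    roots = roots[np.abs(roots) < 1.0]
    # 2. Prefer real roots, then the most persistent (largest modulus).
    is_real = np.isreal(roots)
    order = np.lexsort((-np.abs(roots), ~is_real))  # last key is primary: real first
    return roots[order][:K]

# Toy polynomial with roots 0.95, 0.6, and an explosive 1.1 that gets discarded:
coeffs = np.poly([0.95, 0.6, 1.1])
print(select_eigenvalues(coeffs, 2))
```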




    4 Empirical Findings
    4.1 Term structures across asset classes: data and models

    In this section we briefly describe the asset classes for which we test for excess volatility. In each case, the pricing and factor structure described in Section XXX applies with small modifications, all of which are reviewed in detail in the Appendix. We also leave a more in-depth description of the data for the appendix.

    4.1.1 Interest Rates

    US government bond prices are among the most well-studied data in all of economics. Our US bond data come from Gürkaynak, Sack, and Wright (2006). The data consist of zero-coupon nominal rates with maturities of 1 to 30 years for the period 1985 to 2014, and are available at the daily frequency. The term structure is bootstrapped from coupon bonds and uses only interpolation, not extrapolation (so that a maturity is only present if enough coupon bonds are available for interpolation at that maturity). Given the high liquidity of the Treasuries market, we use all available maturities starting in 1985.

    4.1.2 Credit default swaps

    Credit default swaps (CDS) are the primary security used to trade and hedge the default risk of corporations and sovereigns. As of December 2014, the notional value of single-name CDS outstanding was $10.8 trillion. Our CDS data are from Markit. The CDS data include maturities of 1, 3, 5, 7, 10, 15, 20, and 30 years for the period 2001 to 2013 and are available at the daily frequency. While not all maturities are equally traded, we focus on the most liquid single-name and sovereign CDS. It is useful to remember that our test is conducted and reported separately at all available maturities, so it is easy to assess to what extent the results are driven by maturities for which liquidity is high or low.

    Among the different CDS contracts written on the same reference entity, we choose those with the highest liquidity. In particular, we choose CDS written on senior bonds, with the modified-restructuring (MR) clause, and denominated in US dollars.4 Since there was little CDS activity before the financial crisis and most of these contracts had low liquidity, we focus on the period from January 2007 onwards. We choose the three most traded sovereigns (Italy, Brazil, Russia) and the three most traded corporates (JP Morgan, Morgan Stanley, Bank of America) during 2008,5 a year in which CDS trading volume and CDS spread variability were particularly high. The Appendix describes how CDS prices can be represented in the framework of Section XXX.

    4.1.3 Inflation swaps

    We obtain inflation swaps data from JP Morgan. We observe the full term structure between 1 and 30 years, at the daily frequency, between 2004 and 2014. As reported in Fleming and Sporn (2013), "Despite a low level of activity and its over-the-counter nature, the U.S. inflation swap market is reasonably liquid and transparent. That is, transaction prices for this market are quite close to widely available end-of-day quoted prices, and realized bid-ask spreads are modest." In addition, there is significant trading volume at all maturities, including the very longest ones (20 to 30 years).

    4 For sovereigns, we use contracts with the CR clause, as more data is available than for the MR contracts.

    5 See Fitch (2009).

    The term structure model for inflation swaps follows closely the benchmark of Section XXX. We report the details in the appendix.

    4.1.4 Variance claims

    Markets for financial volatility, including options and variance swap markets, possess a rich term structure of claims. These markets allow participants to trade and hedge the price volatility of effectively any financial security, including equity indices and individual stocks, currencies, government and corporate bonds, etc.

    The first market we study is that for variance swaps on the S&P 500 index, claims to future realized variance (the sum of squared daily returns of the index). The price of a variance swap corresponds to the expectation (under Q) of future realized variance:

        p_{nt} = E_t^Q[RV_{t,t+n}],

    and therefore it fits directly into our framework (with the only difference that the price depends on the payoff variable x_t in levels, not in logs). As discussed in detail in Dew-Becker et al. (2015), the variance swap market is an over-the-counter market with a total outstanding notional of around $4bn vega at the end of 2013 (meaning that a movement of one point in volatility would result in $4bn changing hands). More importantly, the price of a variance swap is anchored to the price of a synthetic swap that can be constructed from option prices. Dew-Becker et al. (2015) show that the term structure of variance swap prices matches very closely the term structure of synthetic claims constructed from options (typically known as the VIX).

    We also study the term structure of at-the-money implied volatilities extracted from options in a variety of asset classes. In theory, synthetic prices of variance swaps (which follow exactly equation XXX) can be constructed in any market by combining the prices of puts and calls at different strikes: the price of this synthetic portfolio, commonly known as the VIX, is tied by arbitrage to the price of a variance swap. This would theoretically allow us to study the term structure of variance swap prices in any market in which we can observe put and call prices, even if actual variance swaps are not traded. Unfortunately, for many asset classes not enough strikes are available to reliably construct the term structure of the VIX. We therefore rely on at-the-money (ATM) implied variances as a proxy for the VIX. This brings us closer to the original setup of Stein (1989), who was working with ATM implied volatilities, and is in part justified by the observation that ATM implied volatilities correspond, up to a first-order approximation, to the prices of claims to realized volatility (√RV), as demonstrated by Carr and Lee (2009). Using ATM implied variances allows us to study a large number of markets.

    In addition to variance swaps on the S&P 500, we study the term structure of ATM implied volatilities for the most liquid options available in OptionMetrics: three domestic indices (S&P 500, Nasdaq, Dow Jones), three international indices (Stoxx 50, FTSE, DAX), three individual names (Apple, Citigroup, IBM), and three currency options.

    4.1.5 Currencies

    Currency forwards are the primary contracts (along with currency swaps) used to trade and hedge exchange rate risk. As of December 2014, the notional value outstanding of currency forwards and swaps was over $60 trillion. Our currency data is from JPMorgan Dataquery. We study six different currencies (versus the US dollar). For four of these we have maturities of 1, 3, 6, 9, and 12 months. For the Euro and the Mexican Peso we have maturities up to 15 years. Some data are available from 1996 to 2014, and all series have data at least as far back as 2002. We have daily data for all currencies.

    We also study currency options, which allow investors to trade and hedge exchange rate volatility. Our data are from JPMorgan Dataquery and have maturities of XXX for the period 1990 to 2014 at the monthly frequency. These data focus on the term structure of Black-Scholes implied volatility.

    4.1.6 Dividend Claims

    We obtain Stoxx 50 dividend futures prices from Bloomberg and Eurex (using the latter prices whenever available). We obtain weekly series from May 2009 to January 2015 (we use weekly data to reduce the impact of noise). Dividend futures data are obtained by interpolating contracts with fixed expiration dates in December of each year. Since part of the dividends expiring in the first year is already accrued at the time of the transaction, we exclude the first maturity from our analysis. We therefore obtain contracts of maturity 2 to 7 years. Finally, we adjust all contracts by the risk-free rate to obtain spot prices rather than futures prices. This step is useful in comparing the prices of the dividend strips to those of the stock market.

    The small set of maturities available does not allow us to truly compare the short and the long end of the curve. However, in the case of dividends we actually observe the price of a claim to the whole infinite-horizon stream of dividends: the Stoxx 50 itself. We can therefore perform the following exercise: we extract the ρ^Q matrix using the time series of dividend strips, and compare its implications with the volatility of the price of the stock market.


    4.2 Implementation

    Before describing the empirical results, we discuss here the procedure we use to identify the number of factors K and the number of prices H at the short end of the curve that are used to extract ρ^Q.

    To choose the number of factors K , we use the panel of prices for all N available maturities to calculate the number of principal components necessary to explain at least 99% of the variance in the panel. This serves as our estimate of K .
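    The choice of K can be sketched with a principal-components count on a toy panel; the data and the one-factor structure below are illustrative, not the paper's.

```python
import numpy as np

def choose_K(prices, threshold=0.99):
    """Number of principal components needed to explain at least `threshold`
    of the variance of a T x N panel of prices."""
    demeaned = prices - prices.mean(axis=0)
    sv = np.linalg.svd(demeaned, compute_uv=False)
    explained = np.cumsum(sv**2) / np.sum(sv**2)
    return int(np.searchsorted(explained, threshold) + 1)

# Toy panel: one dominant factor across 10 maturities plus small noise.
rng = np.random.default_rng(3)
f = rng.normal(size=(500, 1))
loadings = np.linspace(0.5, 1.5, 10)[None, :]
panel = f @ loadings + 0.01 * rng.normal(size=(500, 10))
print(choose_K(panel))
```

    With noise this small the first component dominates and the procedure returns 1, mirroring the tight one-to-three-factor structures the paper reports.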

    We choose H (the prices that define the "short end" of the term structure) for each asset based on the available maturities at which that cash flow is traded. In particular, for term structures for which claims are typically traded (and data are available) up to 24 months, we define the short end of the curve as composed of maturities up to 6 months. For term structures where the traded maturities reach 10 years, we define the short end as maturities up to 3 years. Finally, for term structures that extend up to 30 years, we define the short end as maturities up to 5 years. The main theoretical justification for linking the definition of the "short end" of the term structure to the set of traded maturities is the following. The set of maturities n_1, ..., n_N that we observe to actually trade presumably spans the maturities for which investors believe there is significant price variation. Therefore, we define the short end of the term structure relative to the set of maturities investors choose to trade.

    4.3 Main Results

    We find pervasive evidence of excess volatility in each of the term structures we study. Our main findings are summarized in Figure 1. Clockwise from the upper left, the panels present results for claims to S&P 500 volatility (via options), Spanish sovereign default risk (via CDS), USD/GBP implied volatility, and US treasury bonds.
    Each panel reports variances of prices across a different term structure. The x-axis shows the maturity of claims. The y-axis on the left side shows the volatility (in standard deviation terms) of prices, while the right axis reports the variance ratio.

    The solid thin line in the figure reports the standard deviation of log prices at each maturity. Note that the total volatility at each maturity is a concave function of maturity. This is a first indication that the dynamics under the pricing measure Q cannot be explosive (in that case, the volatility would be an increasing and convex function of maturity).

    The solid thick line reports the standard deviation of the component of log prices explained by the covariance at the short end, √V(p_{nt})_unrestricted. The difference between the two solid lines is the part of the movement in prices that cannot be explained by comovement with the factors extracted from the short end of the curve: the measurement error u_{nt}. Note that the factors extracted from the short end of the curve have extremely high explanatory power for every maturity along the curve, with R2 close to 100% even at the very long end: the unrestricted factor model fits the whole term structure extremely well.

    The dashed line reports the price volatility that the Q-dynamics extracted from the short end of the curve imply at all maturities: √V(p_{nt})_restricted. Note how in all cases the model-implied volatilities increase with maturity at a much slower rate than the actual volatilities of prices. This is precisely what we mean by "excess volatility". The volatility of prices (and the comovement between the long and the short end of the curve) is entirely driven by ρ^Q. However, the prices at the long end of the curve react to shocks to the factors driving the short end much more strongly than those dynamics would imply: long-term prices overreact to movements in the factors.

    The shaded area encloses the 95th and 5th percentiles of the distribution of the volatility under the null of the model (obtained from a bootstrap procedure described in the Appendix). Under a one-sided test of the variance ratio, we reject the null of no overreaction whenever the thick solid line ("total explained variance") lies above the shaded area. In all the cases reported here, we find strongly significant evidence of excess volatility. Finally, two things are important to note. First, the excess volatility we find cannot be explained by movements in discount rates. Under the Q measure, all prices should simply equal the expectation of future cash flows. Given the factor model we estimate, this expectation is fully pinned down by ρ^Q (and the current level of the factors), and no other variation in expected cash flows or discount rates can affect prices. Second, the results of our paper are not about the fit of the factor model: the unrestricted factor model explains nearly 100% of price variation at every maturity, so poor fit cannot account for the excess volatility we document.
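    A stylized one-factor version of the restricted-versus-actual comparison can be sketched as follows; all numbers are made up, and the paper's actual test uses K factors and a bootstrap. We build overreaction into the long end and recover it as a variance ratio above one.

```python
import numpy as np

rng = np.random.default_rng(4)
rho_true, lam, n = 0.8, 1.5, 10  # lam > 1 builds overreaction into the long end

# Toy one-factor prices: the short end obeys the Q-dynamics exactly,
# while the long-maturity price reacts too strongly to the factor.
x = rng.normal(size=100_000)
p1 = rho_true * x
p2 = rho_true**2 * x
pn = lam * rho_true**n * x

# Extract rho^Q from the short end: slope of p2 on p1.
rho_hat = (p1 @ p2) / (p1 @ p1)

# Model-implied (restricted) variance at maturity n versus the actual variance.
implied = rho_hat**(2 * (n - 1)) * np.var(p1)
variance_ratio = np.var(pn) / implied
print(round(variance_ratio, 4))  # ≈ lam**2 = 2.25
```

    The short-end regression recovers the true persistence, so the variance ratio isolates exactly the extra sensitivity of the long-maturity price, the analogue of the gap between the thick solid line and the dashed line in Figure 1.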





    4.3.2 Dividend strips

    For the case of dividend claims we perform a slightly different analysis, since we compare the term structure of dividend strips with the price of the entire stock market. We therefore report all the results in this section.
    The term structure of log price-dividend ratios of dividend strips (pd_nt, n = 2, …, 7) has a strong factor structure. The first principal component explains 97.5% of the total variation, and the first two components explain 99.3%. We therefore employ a two-factor model. Given the limited number of maturities available, we extract the two factors using all available maturities.
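    The factor-selection step used throughout the paper (keep enough principal components to explain a target share of total variance) can be sketched as follows; this is an illustrative reimplementation, not the paper's code:

```python
import numpy as np

def choose_factors(P, threshold=0.99):
    """Return (K, factors): the number K of principal components of the
    demeaned panel P (T x N, one column per maturity) needed to explain
    `threshold` of total variance, and the K factor time series."""
    X = P - P.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    K = int(np.searchsorted(explained, threshold)) + 1
    return K, X @ Vt[:K].T

# Toy panel with a near-one-factor structure across 8 maturities:
rng = np.random.default_rng(0)
f = rng.normal(size=(200, 1))
P = f @ np.linspace(1.0, 2.0, 8)[None, :] + 0.01 * rng.normal(size=(200, 8))
K, factors = choose_factors(P)
print(K)  # 1: the first component alone clears the 99% threshold
```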



    We therefore document a significant violation of the law of iterated expectations when comparing the dividend strip curve and the infinite-maturity claim. Interestingly, the violation points to a lower volatility of the stock market claim relative to the volatility of the corresponding-duration claim in the dividend strips market.

    We believe there are two possible interpretations for this surprising result. First, it may reflect different factors driving the dividend strips market and the infinite claim, for example frictions or tax effects (see Boguth et al.). If the two markets are not well integrated, and respond to different factors, it can appear to the observer as a violation of the law of iterated expectations.

    A second, perhaps more interesting, possibility is that, since the dividend strips market is effectively a derivative market on the much larger market for the stock index itself, what we are documenting here is overreaction and excess volatility in the strips market relative to the movements of the entire stock market. This may explain why dividend strips have been found to be extremely volatile (for example by Binsbergen et al.), though it is naturally not enough to explain the declining term structure of Sharpe ratios across maturities documented by Binsbergen et al.

    5 Robustness

    Key to the methodology of the paper is extracting ρQ from the short end of the term structure and comparing the predicted and actual variance of prices at longer maturities. To do so, in our main analysis we extracted K factors from the short end of the curve, where K was chosen as the number of factors that explain 99% of the variation of the whole term structure, and the “short end” was defined to be composed of the K shortest maturities available. In this section we consider several robustness tests. All the results are reported in the Appendix Figures.

    First, we extract K factors by looking at more than K maturities at the short end of the curve. We follow the methodology presented in Section 2 to extract the K principal components out of the first H > K maturities. In the Appendix we consider H = K + 1 and H = K + 2 maturities. Doing so has two effects. On the one hand, it potentially reduces measurement error in extracting the factors from the short end of the curve. On the other hand, it expands the set of maturities treated as the “short end” of the curve, in some cases getting very close to the “long end”. Expanding the set of maturities H therefore makes our test less sharp, because it often uses much longer maturities. While in most cases doing so results in a lower estimate of excess volatility, for the vast majority of cases the results are still strong and statistically significant.

    In a third robustness test, we again set H = K as in the main text, but now require enough factors K to explain 99.9% of the variation in the term structure. This greatly increases the number of factors required and, again by using longer maturities, makes our test less sharp. Except in a couple of cases, all of our results still hold.
    Finally, we report a version of our main results where we use asymptotic standard errors (instead of bootstrapping them).

    6 Conclusion

    We document excess volatility and a violation of the law of iterated expectations in a large cross-section of asset classes. Our test of excess volatility exploits the overidentification restrictions offered by observing a term structure of claims on the same cash flows.

    We use the short end of the term structure to learn investors’ implied dynamics of cash flows under the pricing measure Q. This gives us a model of expectations under Q at all maturities, which are linked by the law of iterated expectations and the implied dynamics of the factors driving the cash flows. We find that prices of long-maturity claims are dramatically more variable than justified by the behavior of short-maturity claims. This excess volatility cannot be explained by time variation in discount rates, because that is already accounted for by the risk-neutral expectations we extract from the short end of the term structure. Our results therefore show that the excess volatility puzzle first highlighted by Shiller (1982) cannot be fully accounted for via rational variation in discount rates.

    A Appendix
    A.0.3 The Origins of Q, Heteroskedasticity, and Other Considerations

    Before moving to other asset classes, we take advantage of the well-researched interest rate setting to discuss a few considerations in more detail. We begin with a discussion of the pricing measure, Q .

    Up to now, we have assumed the absence of arbitrage, which ensures the existence of a measure Q under which prices are given by equation (11). Q-measure event probabilities are distorted versions of the objective probability measure, P , that is observed by the econometrician and that describes the evolution of cash flows. The distortion of Q has a natural economic interpretation. In particular, the Q -probability of a given state of the world arises
  10. Kenneth D. West slides

    The Equilibrium Real Funds Rate: Past, Present and Future

    James D. Hamilton, U.C. San Diego and NBER
    Ethan S. Harris, Bank of America Merrill Lynch
    Jan Hatzius, Goldman Sachs
    Kenneth D. West, U. Wisconsin and NBER

    October 2015

    I. Introduction

    -There is a consensus that we’re heading towards a “new neutral”: an era of a lower equilibrium real Fed funds rate.
      -Bond market
        -Nominal funds rate forecast to peak below 2%
      -Summary of Economic Projections for the FOMC

    “We may well need, in the years ahead, to think about how we manage an economy in which the zero nominal interest rate is a chronic and systemic inhibitor of economic activity, holding our economies back below their potential.” (Summers (2013b))

    Our goals

    -Analyze the past behavior of the real rate, to help form an idea of the prospective equilibrium value of the real rate: the forecast of the real rate 5 or 10 or 12 years from now.
    -This will require initial focus on output growth and the equilibrium rate, since conventional wisdom posits a tight link between the two:
      -Theory: consumption growth is tied to the real rate. So on a balanced growth path, trend output growth is also tied to the trend (or equilibrium) real rate (e.g., New Keynesian models).
      -Empirically: Laubach and Williams (2003) make trend output growth the central determinant of the equilibrium rate.


    -The equilibrium rate:
      -is hard to pin down;
      -is highly variable;
      -has many determinants
        -trend output growth plays no special role.
    -A vector error correction model that looks only to U.S. and world real rates well captures the behavior of U.S. real rates.
    -Looking forward, a plausible range for the equilibrium rate is perhaps: a little above 0 up to 2%.
    -A standard model and loss function suggest: given uncertainty about the equilibrium rate, the Fed should prefer later and steeper normalization of the Fed funds rate.


    -“Equilibrium rate”: real safe rate consistent with full employment and stable inflation. Equivalent to:
      -steady state real rate, and
      -forecast of the real rate 5 or 10 or 12 years from now.
    -We make no attempt at structural estimation.
    -Instead we use rolling averages of time series on the real rate on short-term government debt (the real Fed funds rate for post-World War II U.S.).
    -Much of our argument is informal.

    I. Introduction
    II. Construction of ex-ante real rates
    III. The real rate, consumption growth and aggregate growth
    IV. The real rate and aggregate growth: empirical analysis
    V. Narrative evidence on real rates in the U.S.
    VI. Long run tendencies of the real rate
    VII. Monetary policy implications of uncertainty
    VIII. Conclusion

    II. Construction of ex-ante real rates

    -Focus is on the U.S. For the U.S., post-WWII data sources are conventional.
    -We also use cross country developed country data:
      -annual data going back 150+ years, up to 17 countries,
      -quarterly data back to 1971, up to 20 countries.

    Construction of ex-ante real rates

    -Real rate ≡ nominal policy rate – expected inflation
    -Policy rate:
      -discount rate (countries other than U.S.)
      -commercial paper rate, discount rate, Fed funds rate (U.S.)
    -Expected inflation from univariate AR in CPI inflation, rolling samples
      -annual: AR(1), rolling sample = 30 years
      -quarterly: AR(4), rolling sample = 40-80 quarters
      -Exception: U.S. uses GDP deflator 1929-2014
    Plots of quarterly and annual U.S. rates (see paper for plots from other countries)
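    The construction on this slide translates directly into code. The sketch below implements the annual version (AR(1) on a 30-year rolling sample) on hypothetical data; it is a minimal illustration, not the authors' code:

```python
import numpy as np

def ex_ante_real_rate(policy_rate, inflation, window=30):
    """Ex-ante real rate = nominal policy rate minus expected inflation,
    where expected inflation comes from an AR(1) fit on a rolling `window`
    of past inflation. NaN during the initial warm-up period."""
    r = np.full(len(policy_rate), np.nan)
    for t in range(window, len(policy_rate)):
        past = np.asarray(inflation[t - window:t])
        b, a = np.polyfit(past[:-1], past[1:], 1)   # regress pi_s on pi_{s-1}
        r[t] = policy_rate[t] - (a + b * inflation[t - 1])
    return r

# Toy check: AR(1) inflation with long-run mean 2%, constant 4% policy rate
rng = np.random.default_rng(0)
pi = np.zeros(200); pi[0] = 2.0
for t in range(1, 200):
    pi[t] = 1.0 + 0.5 * pi[t - 1] + 0.2 * rng.normal()
r = ex_ante_real_rate(np.full(200, 4.0), pi)
print(np.nanmean(r))  # close to 2, the true ex-ante real rate here
```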




    III. The real rate, consumption growth and aggregate growth

    -Real rates are often modeled as tightly tied to growth in output or potential output.
      -New Keynesian models and their offshoots, e.g., Laubach and Williams (2003).
      -Discussion of secular stagnation, e.g. Summers (2013a,b).
    -Depending on the model, that link works in whole or in part through the link between real rates and consumption.
    -In this section, we note that for rt and consumption there is
      -a good theoretical case for a link between the two series, and
      -a poor empirical case for a link between the two series.

    Intertemporal IS


    rt and consumption



    IV. The real rate and aggregate growth: empirical analysis

    -Perhaps there will be a clear long-run relationship between trend output growth and the equilibrium rate, despite the weak evidence of such a relationship between consumption and the equilibrium rate.
    -We compute the sample correlation between average GDP growth and average real rates over various windows.
    -We focus on the sign and magnitude of this correlation. We do not attempt to supply an economic interpretation of the estimated correlation.
    -Preview of results: the sign of the correlation is not robust, but instead is sensitive to inclusion or exclusion of a small number of observations. As well, the absolute value of the magnitude of the correlation is small.

    Average GDP growth y vs average rt

    -We investigate the correlation between GDP growth and real rates with the following data:
      A. Business cycle (peak to peak) (U.S. data)
      B. 10 year averages (U.S. data)
      C. Averages over 10, 20, 30 and 40 years (cross-country data)
    -Our view is that we are taking averages over a long enough period that the average rate will closely track the equilibrium rate.
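    A minimal sketch of the averaging exercise (non-overlapping blocks for simplicity, unlike the peak-to-peak and rolling windows in the paper), with an `omit` argument to probe the sample-sensitivity point made above:

```python
import numpy as np

def window_correlation(growth, real_rate, window=10, omit=()):
    """Correlation of (average growth, average real rate) pairs computed
    over non-overlapping `window`-year blocks; block indices in `omit`
    are dropped, to check how sensitive the sign is to a few observations."""
    n = len(growth) // window
    g = np.asarray(growth[: n * window]).reshape(n, window).mean(axis=1)
    r = np.asarray(real_rate[: n * window]).reshape(n, window).mean(axis=1)
    keep = [i for i in range(n) if i not in set(omit)]
    return np.corrcoef(g[keep], r[keep])[0, 1]

# Toy check: perfectly linked series give a correlation of 1
g = np.arange(40.0)
r = 2.0 * np.arange(40.0)
print(window_correlation(g, r))  # 1.0
```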

    A. Average GDP growth y vs average rt: peak to peak (U.S. data)

    Unit of observation is (average GDP growth, average rt), computed from a business cycle peak to the next business cycle peak.

    1. Quarterly
      7 data points,
        1960:2-1969:4 delivers first observation,
        2001:1-2007:4 delivers last observation.

    2. Annual
      29 data points,
        1869-1873 delivers first observation,
        2001-2007 delivers last observation.









    Numerical values of correlations, peak to peak calculations

    Quarterly (N=7)      -0.40
    Quarterly, omit 1980:1-1981:3 (N=6)    0.32
    Annual (N=29)      0.23
    Annual, omit 1918-1920, 1944-1948 (N=27)    -0.23

    Correlations for other samples and data measures are reported in the paper (Exhibit 3.4). The numbers above are representative:
      (a) The absolute value of the correlation is small.
      (b) The sign of the correlation is sensitive to minor changes in sample.

    B. Average GDP growth y vs average rt: 10 year averages (U.S. data)

    Numerical values of correlations:

    Quarterly, 1968:1-2014:3 (N=187)      0.39
    Quarterly, 1968:1-2007:4 (N=160)     -0.19
    Annual, 1879-2014 (N=136)      -0.25
    Annual, 1879-2014, omit 1930-1950 (N=115)    0.31

    Correlations for other samples and data measures are reported in the paper (Exhibit 3.5). The numbers above are representative:
      (a) The absolute value of the correlation is small.
      (b) The sign of the correlation is sensitive to minor changes in sample.

    C. Average GDP growth y vs average rt: cross-country data

    -Quarterly data
    -Unit of observation is (average GDP growth, average r) for a given country, computed over four samples:

      -2004:1-2014:2, N=20 countries; corr(y, r) = 0.23
      -1994:1-2014:2, N=18 countries; corr(y, r) = 0.63
      -1984:1-2014:2, N=15 countries; corr(y, r) = 0.42
      -1971:2-2014:2, N=13 countries; corr(y, r) = 0.36


    Summary: average GDP growth y vs average rt

    -Wide range of average ex-ante real interest rates associated with a given average output growth rate.
    -Weak correlation between average ex-ante real rate and average growth rate, with the sign of the correlation sensitive to inclusion or exclusion of a small number of observations.

    Summary: average GDP growth y vs average rt, cont’d

    Even if one puts more weight on samples with a positive correlation, we think there are two implications:

    -If, indeed, we are headed for stagnation for supply side reasons (Gordon (2012, 2014)), any such slowdown should not be counted on to translate into a lower equilibrium rate over periods as short as a cycle or two or a decade.
    -The relation between average output growth and average real rates is so noisy we are forced to conclude that other factors play a large, indeed dominant, role in determination of average real rates. In the next section we take a narrative approach to sorting out some of these factors.


    V. Narrative evidence on real rates in the U.S.

    -Lots of shifting or hard to model variables

      -Trend growth
      -Time varying volatility
      -Shape of utility function
      -Financial frictions
      -Incomplete markets
      -Heterogeneous agents
      -Monetary transmission mechanism

    -Historical narrative allows intuition to roam where formal analysis might flounder

    Narrative evidence: overview

    -Since our focus is on the equilibrium rate we continue to look at averages over various time periods (not necessarily peak to peak or exactly 10 years)
    -Goal: to understand what forces are influencing the equilibrium rate today.
    -Bottom line: the record supports a wide range of plausible values for the equilibrium rate, up to 2% or so.
    -This presentation: just a couple of points.

    A. Real rates in the 1991-2007 cycles
    B. Outlook for the current cycle

    A. Real rates in the 1991-2007 cycles

    -1991:1 (trough) to 2001:1 (peak) cycle:
      -rt at or below 1.05% for nearly 2 years (1992:2-1993:4)
      -Subsequent peak: 4.7% (1998:2)
    -2001:4-2007:4 cycle:
      -rt at or below 0.3% for nearly four years (2001:4-2005:3), below zero for over 2 years (2002:4-2005:1)
      -Subsequent peak: 3.1% (2006:4)

    B. Outlook for the current cycle

      -Not a normal business cycle. Reinhart and Rogoff data on systemic banking crises:


    -Not a normal business cycle
    -Fiscal tightening: 2012-14: record 5% of GDP (see graph of deficit adjusted for automatic stabilizers as % of GDP)
      -Special bonus: lots of brinkmanship!


    History lessons

    1. Equilibrium rate sensitive to: changing policy transmission, regulatory headwinds, inflation cycles and delayed recoveries.
    2. Post-WWII data allows a wide range of estimates for the equilibrium real rate, even as high as 2%. That is a good deal higher than the near-zero number priced into the market.
    3. Given how hard it is to distinguish cycle and secular stagnation, we can’t think of a tougher time to pin down the equilibrium rate.


    VI. Long run tendencies of the real rate

    -Goal: develop and estimate time series model for annual data that can be used to forecast the U.S. real rate
    -End product: first order bivariate vector error correction model in U.S. real rate and the “world rate”

      -“error correction”: we treat real rates as nonstationary
      -“world rate”: median over our 17 countries of country-specific average real rates, computed in each country using 30 year rolling samples

    Nonstationary real rates?

    -Regime shifts / structural breaks / unit roots commonly found in time series on real rates, in the U.S. and other countries
    -We test for stability of the U.S. real rate, decisively rejecting the joint null of stationarity and stability.
    -We elect to model the data in differences, testing for stability of the VECM. The VECM does not reject the null of stability. It also does not reject the null that the constant terms are zero.

    The long run world rate

    -For country n (n=1,…,17) let rnt be the real rate.
    -In country n, compute the average real rate using the previous 30 years of data on rnt (i.e., roll through the sample using 30 year windows). Call this lnt.
    -The long run world rate lt is the median over n=1,…,17 of lnt.
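    The three bullets above translate almost line-for-line into code (illustrative sketch of the definition, not the authors' program):

```python
import numpy as np

def long_run_world_rate(rates, window=30):
    """rates: (T x N) annual real rates for N countries. For each country,
    compute the trailing `window`-year average l_nt, then take the
    cross-country median to get the long-run world rate l_t. NaN during
    the initial warm-up years."""
    T, N = rates.shape
    l = np.full(T, np.nan)
    for t in range(window - 1, T):
        l_nt = rates[t - window + 1 : t + 1].mean(axis=0)  # per-country average
        l[t] = np.median(l_nt)                             # median across countries
    return l

# Toy check: three countries with constant rates 1%, 2%, 3% give a world rate of 2%
R = np.tile(np.array([1.0, 2.0, 3.0]), (40, 1))
print(long_run_world_rate(R)[-1])  # 2.0
```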


    VECM Estimates


      -Feedback from the world rate is substantial. If the U.S. rate is 1% below the world rate, then all else equal we expect the U.S. rate to move 40 basis points closer to the world rate in the next year.
      -Std dev of residual =260 basis points: despite cointegration, in any given year, substantial divergence between U.S. and world rate is possible.
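    A simulation using the two magnitudes quoted above (40% annual feedback toward the world rate, 260 basis point residual standard deviation) shows both forces at work; holding the world rate fixed is a simplifying assumption for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, sigma = 0.40, 2.60   # feedback coefficient, residual std (pct points)
world = 2.0                 # world rate held fixed for this illustration
r = 1.0                     # U.S. rate starts 1% below the world rate
path = []
for _ in range(50):
    # Error-correction step: close 40% of the gap, then add the residual shock
    r = r + alpha * (world - r) + sigma * rng.normal()
    path.append(r)
print(np.mean(path))  # hovers around the world rate, with wide swings year to year
```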


    Variability and Uncertainty of Estimates in Some Other Studies


    Note: “Range” presents the lowest and highest value in the indicated sample, using the authors’ preferred specification. “Max discrep” is the maximum point-in-time discrepancy (i.e., maximum difference) between two estimates of the equilibrium rate at a given quarter, with the two estimates computed from seemingly similar specifications.



    VII. Monetary policy implications of uncertainty

    -What does uncertainty about the equilibrium rate imply for monetary policy?
    -Building on intuition and analysis of Orphanides and Williams (2002), we use FRB/US to quantify effects of uncertainty about r*
      -Notation: r* = equilibrium rate

    Orphanides and Williams (2002)

    -Orphanides and Williams (“OW”, 2002) consider optimal monetary policy rules using a small stylized model of the US economy.
    -They show that uncertainty around r* implies that the optimal monetary policy rule should be more “inertial” than a standard Taylor rule.
    -An inertial rule puts more weight on the lagged funds rate and less weight on the estimated value of r*.
    -An extreme inertial rule is a “difference rule.” In a difference rule, the change in interest rates depends on the level of inflation and the employment gap.

    Our analysis

    -We revisit the OW analysis in a richer and more realistic setting, namely the board staff model FRB/US benchmarked to the FOMC’s current economic outlook.
    -We augment FRB/US by introducing errors into the FOMC’s perception of r*, which feed back into the path for the federal funds rate.
    -We then compare the behavior of the economy under a standard Taylor (1999) rule with the behavior of the economy under an alternative “inertial” Taylor 1999 rule that responds only partially to changes in the estimate of r*.

    Some details

    -We use the unemployment rate ut as the real activity variable, in both the Taylor rule and the Fed’s loss function.
    -The loss function is expected discounted sum of per period losses, with baseline


    Taylor rule

    -Taylor rule is maximum of 0 and
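    As a sketch of the two rules being compared, here is a generic Taylor (1999)-style rule with an unemployment gap and a zero lower bound, alongside the inertial variant; the coefficients and gap definitions are illustrative assumptions, not the slides' exact specification:

```python
def taylor_rate(r_star, pi, u, pi_star=2.0, u_star=5.0):
    """Taylor-type rule with a zero lower bound: funds rate is the max of 0
    and r* + pi + response to gaps. The 0.5 and 2.0 coefficients are
    illustrative assumptions."""
    return max(0.0, r_star + pi + 0.5 * (pi - pi_star) - 2.0 * (u - u_star))

def inertial_rate(i_lag, r_star, pi, u, rho=0.85, **kw):
    """Inertial variant: weight rho on the lagged funds rate, so errors in
    the perceived r* feed into the funds rate only gradually."""
    return max(0.0, rho * i_lag + (1 - rho) * taylor_rate(r_star, pi, u, **kw))
```

    On impact, a 1 percentage point error in the perceived r* moves the inertial rate only (1 - rho) = 0.15 times as much as the standard rule, which is the sense in which inertia insures against r* uncertainty.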


    Modeling uncertainty about r*

    -Uncertainty about r* means:
      -One possible path for r*t going forward is one consistent with the Dec. 2014 SEP projections for i, π and u, along with π* = 2.0 and the FRB/US path for u*t. This is the SEP-consistent or baseline path.
      -Two other possible paths begin ±150 bp from the baseline path, converging to the baseline path in 2020 (picture on next overhead). These are the high and low paths. (With robustness checks for ±50 and ±250.)
      -In computing expected losses, each of the three paths is equally likely (each has probability 1/3).
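    With three equally likely paths, the expected loss is just the average of per-path discounted losses. A sketch follows; the quadratic per-period loss is an assumed stand-in, since the slides only summarize the actual FRB/US loss function:

```python
import numpy as np

def discounted_loss(pi_gap, u_gap, beta=0.99):
    """Discounted sum of assumed quadratic per-period losses in the
    inflation gap and unemployment gap."""
    t = np.arange(len(pi_gap))
    return float(np.sum(beta ** t * (np.asarray(pi_gap) ** 2 + np.asarray(u_gap) ** 2)))

def expected_loss(per_path_outcomes):
    """per_path_outcomes: [(pi_gap, u_gap), ...], one pair of simulated gap
    paths for each r* path (baseline, high, low); each has probability 1/3."""
    return float(np.mean([discounted_loss(p, u) for p, u in per_path_outcomes]))
```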


    Our results

    -We assume backward-looking expectations, with a quick check for model-consistent expectations.
    -Consistent with OW, we find that a more “inertial” policy rule leads to better economic performance if there is more uncertainty around r* (next overhead).
    -We show that when the starting point for the funds rate is zero, an “inertial” policy rule yields a later but steeper normalization.






    Broader context

    -Our analysis complements other arguments for a later liftoff:
    -While some job market measures such as job openings and headline unemployment have tightened a lot, broad measures such as U6 and E/P still show substantial slack.
    -The continued weakness of nominal wage growth supports a focus on broad as opposed to narrow slack measures.
    -Core inflation remains well below the 2% target, and only some of this is explained by oil and dollar pass-through.
    -The risks to global growth and inflation remain on the downside. At the ZLB, hiking too early is riskier than hiking too late.


    VIII. Conclusion

    -There is much uncertainty about the equilibrium rate, which varies considerably over time, and arguably is well modeled as having a unit root.
    -The determinants of the equilibrium rate are manifold and time varying, with the effects of trend output growth generally dominated by those of other factors.
    -A vector error correction model that looks only to U.S. and world real rates well captures the behavior of U.S. real rates.
    -Looking forward, a plausible range for the equilibrium rate is wide, perhaps ranging from a little above 0 up to 2%.