Tom Boot (Econometric Institute, EUR)
27 August 2015
Controlled shrinkage and variable selection
A shrinkage and selection estimator is introduced which controls the probability of exceeding the oracle shrinkage factor. When used to shrink individual coefficients, this allows straightforward control over the probability of erroneously deleting regressors. Alternatively, when used to shrink coefficients by a common factor, the estimator dominates the ordinary least squares (OLS) estimator in terms of risk when the number of regressors k > 3 and the exceedance probability is set less than or equal to 0.5. The method is illustrated throughout by an empirical example on cross-country GDP growth. We apply the estimator both in a setting where one has no a priori interest in specific variables and in one where some variables are included only to increase the accuracy of selected target variables.
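The k > 3 dominance condition echoes the classical James-Stein result. As background intuition only (the abstract does not give the paper's own estimator, and all parameter values below are chosen for illustration), a minimal simulation compares OLS with a common James-Stein-type shrinkage factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma2 = 200, 10, 1.0

# Fixed design and a weak signal, so shrinkage toward zero pays off.
X = rng.standard_normal((n, k))
beta = np.full(k, 0.1)

mse_ols = mse_shrunk = 0.0
for _ in range(500):
    y = X @ beta + rng.standard_normal(n) * np.sqrt(sigma2)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    # James-Stein-type common shrinkage factor (positive-part version)
    q = b_ols @ (X.T @ X) @ b_ols
    factor = max(0.0, 1.0 - (k - 2) * sigma2 / q)
    b_shrunk = factor * b_ols
    mse_ols += np.sum((b_ols - beta) ** 2)
    mse_shrunk += np.sum((b_shrunk - beta) ** 2)

# With k = 10 > 3, the shrunken estimator accumulates lower MSE than OLS.
```

The common factor pulls all coefficients toward zero by the same amount, which is the sense in which "shrinking coefficients using a common factor" can dominate OLS.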
Michael Rockinger (University of Lausanne)
10 September 2015
Optimal Long-Term Allocation with Pension Fund Liabilities
We build a macroeconomic model for Switzerland, the Euro Area, and the USA that drives the dynamics of several asset classes and the liabilities of a representative Swiss (defined-contribution) pension fund. This encompassing approach allows us to generate correlations between returns on assets and liabilities. We calibrate the economy using quarterly data between 1985:Q1 and 2013:Q2. Using a certainty equivalent approach, we demonstrate that a liabilities-hedging portfolio outperforms an assets-only strategy by between 5% and 15% per year. The main reason for such a large improvement is that the optimal assets-only portfolio is typically long in cash, whereas hedging liabilities requires the pension fund to be short in cash. It follows that imposing positivity restrictions in the construction of the portfolio also results in a large cost, between 4% and 8% per year. This estimate suggests that allowing pension funds to hedge their liabilities by borrowing cash and investing in a diversified bond portfolio helps to enhance the global portfolio return.
Co-author: Eric Jondeau
Charles Bos (Vrije Universiteit Amsterdam)
24 September 2015
A Quantile-based Realized Measure of Variation: New Tests for Outlying Observations in Financial Data
In this article we introduce a new class of test statistics designed to detect the occurrence of abnormal observations. It derives from the joint distribution of moment- and quantile-based estimators of the power variation σ^r, under the assumption of a normal distribution for the underlying data. Our novel tests can be applied to test for jumps and are found to be generally more powerful than widely used alternatives. An extensive empirical illustration for high-frequency equity data suggests that jumps can be more prevalent than inferred from existing tests on the second or third moment of the data.
Co-author: Pawel Janus
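The abstract does not spell out the test statistic, but the general idea — moment- and quantile-based estimators of scale agree under normality and diverge in the presence of outliers — can be sketched. Under normality, IQR/1.349 consistently estimates σ; a single jump inflates the moment-based estimate far more (all data below is simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)

# Moment-based and quantile-based estimators of the scale sigma.
# Under normality, IQR / 1.349 is a consistent estimate of sigma,
# so the two roughly agree on clean data ...
s_moment = x.std()
q75, q25 = np.percentile(x, [75, 25])
s_quant = (q75 - q25) / 1.349

# ... but one large outlier ("jump") moves the moment estimator far
# more than the quantile one; the discrepancy is the test signal.
x_jump = x.copy()
x_jump[0] = 100.0
s_moment_j = x_jump.std()
q75j, q25j = np.percentile(x_jump, [75, 25])
s_quant_j = (q75j - q25j) / 1.349
```

A formal test would scale this discrepancy by the joint asymptotic distribution of the two estimators, which is what the paper derives.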
Fang Xu (Reading University)
1 October 2015
Local trends in price-to-dividend ratios - assessment, predictive value and determinants
Persistent variations of the log price-to-dividend ratio (PD) in the US and their potential economic determinants have attracted a lively discussion in the literature. Adopting a present value model, we suggest a gradually time-varying state process to govern the persistence of the PD. In comparison with models presuming a constant mean or discrete mean shifts, the adopted state space approach offers favourable model diagnostics and finds particular support in out-of-sample stock return prediction. Regarding potential economic trends behind the persistence of the PD during the past 60 years, we show that this slowly evolving mean process is jointly shaped by consumption risk, the demographic structure of the population, and the proportion of firms with a traditional dividend payout policy. In particular, the volatility of consumption growth plays the dominant role.
Federico Bandi (Johns Hopkins University)
15 October 2015
Low-frequency asset pricing dynamics
Stock return predictive relations found to be elusive when using raw data may hold true for different low-frequency components of the data. Similarly, cross-sectional asset pricing models shown not to be supported by raw data may be satisfied when risk is suitably defined with respect to low-frequency components of the aggregate series. Consistent with this premise, the presentation discusses a novel approach to the analysis of financial time series viewed as the result of a cascade of shocks operating at different frequencies. The approach leads to new asset pricing models as well as to formal justifications for existing results in the finance literature relying on aggregation.
Peter Reinhard Hansen (European University Institute)
29 October 2015
Realized Factor GARCH
We introduce a multivariate GARCH model that utilizes realized measures of volatility and correlations. The model has a hierarchical factor structure, where the core of the model specifies the underlying volatility factors, which form the basis for the modeling of all individual return series. This structure makes the model tractable and scalable. We apply the model to US equity data, where the underlying factor structure is deduced from industry-specific exchange-traded index funds.
Co-author: Asger Lunde
Thorsten Joachims (Cornell University)
5 November 2015
From Contextual Bandits to Conditional Treatment Effects
Log data is one of the most ubiquitous forms of data available, as it can be recorded from a variety of systems (e.g., search engines, recommender systems, ad placement) at little cost. The interaction logs of such systems typically contain a record of the input to the system (e.g., features describing the user), the prediction made by the system (e.g., a recommended list of news articles) and the feedback about the quality of this prediction (e.g., number of articles the user read). This feedback, however, is only partial-information feedback -- aka "contextual bandit feedback" -- limited to the particular prediction shown by the system. This is fundamentally different from conventional supervised learning, where "correct" predictions (e.g., the best ranking of news articles for that user) together with a loss function provide full-information feedback. In this talk, I will explore approaches and methods for batch learning from logged bandit feedback (BLBF). Unlike the well-explored problem of online learning with bandit feedback, batch learning with bandit feedback does not require interactive experimental control of the underlying system, but merely exploits log data collected in the past. The talk explores how Empirical Risk Minimization can be used for BLBF, the suitability of various counterfactual risk estimators in this context, and a new learning method for structured output prediction in the BLBF setting. From this, I will draw connections to methods for estimating conditional average treatment effects.
Co-author: Adith Swaminathan
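The basic building block behind the counterfactual risk estimators mentioned above is inverse propensity scoring (IPS): logged losses are reweighted by the ratio of the new policy's action probability to the logging policy's recorded propensity. A minimal sketch on a toy log (all policies and loss values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy log: a stochastic logging policy picked one of 3 actions per
# interaction and recorded its propensity and the observed loss.
n, n_actions = 10000, 3
true_loss = np.array([0.9, 0.5, 0.1])   # expected loss of each action
log_probs = np.array([0.5, 0.3, 0.2])   # logging policy probabilities
actions = rng.choice(n_actions, size=n, p=log_probs)
losses = (rng.random(n) < true_loss[actions]).astype(float)
propensities = log_probs[actions]

# IPS estimate of the risk of a new deterministic policy that always
# plays action 2: reweight logged losses by pi_new / pi_log.
new_probs = np.array([0.0, 0.0, 1.0])
ips_risk = np.mean(new_probs[actions] / propensities * losses)

# ips_risk is close to the true expected loss of action 2 (0.1),
# even though action 2 was logged only 20% of the time.
```

The estimator is unbiased but can have high variance when propensities are small, which is precisely why the suitability of different counterfactual risk estimators matters for BLBF.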
Laurent Pauwels (University of Sydney)
12 November 2015
Some theoretical results on forecast combinations
This paper proposes a unified framework for analysing the theoretical properties of forecast combination. The proposed framework is not only useful for deriving all existing results with ease but also provides important insights into two unresolved puzzles of forecast combination. Specifically, this paper aims to explain why a simple average of forecasts often outperforms forecasts from single models in terms of mean squared forecast error (MSFE), and to determine why a more complicated weighting scheme does not always perform better than a simple average. While this paper obtains several new theoretical results, two of them are particularly important in practice. First, the MSFE of the forecast combination decreases as the number of models increases. Second, the conventional approach of selecting optimal models by a simple comparison of MSFEs, without further statistical testing, will lead to biased results.
Co-author: Felix Chan
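The first result — combination MSFE falling in the number of models — is easy to see under the simplifying assumption of unbiased forecasts with independent errors (a much stronger assumption than the paper needs, used here only for illustration): averaging m such forecasts cuts the error variance to roughly σ²/m.

```python
import numpy as np

rng = np.random.default_rng(3)
T, sigma = 20000, 1.0
y = rng.standard_normal(T)  # target series

def msfe_of_average(m):
    # m unbiased forecasts with independent errors of variance sigma^2;
    # the equal-weight combination has MSFE ~ sigma^2 / m.
    forecasts = y[:, None] + sigma * rng.standard_normal((T, m))
    combo = forecasts.mean(axis=1)
    return np.mean((combo - y) ** 2)

msfe_1 = msfe_of_average(1)    # ~ 1.00
msfe_5 = msfe_of_average(5)    # ~ 0.20
msfe_25 = msfe_of_average(25)  # ~ 0.04
```

With correlated or biased forecast errors the decline is slower and can stall, which is where the paper's more general framework takes over.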
Kris Boudt (Vrije Universiteit Brussel)
19 November 2015
Positive Semidefinite Integrated Covariance Estimation, Factorizations and Asynchronicity
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It applies a Cholesky factorization to the covariance matrix, exploiting the heterogeneity in trading intensities to estimate the different parameters sequentially with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results, and Monte Carlo simulations confirm good finite sample properties. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We find that the dynamic models utilizing the proposed high-frequency estimator provide statistically and economically superior forecasts.
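The "positive semidefinite by construction" property follows from assembling the estimate through its Cholesky factor: whatever the element-wise estimation error in a lower-triangular L, the product L L' is always a valid covariance matrix. A minimal sketch (the true factor and the noise are simulated for illustration, not the paper's sequential scheme):

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5

# Suppose each element of the Cholesky factor is estimated separately
# (in the paper: sequentially, with element-specific sample sizes).
# Even with heterogeneous estimation noise per element, the assembled
# estimate L_hat @ L_hat.T is positive semidefinite by construction.
true_L = np.tril(rng.standard_normal((k, k)))
np.fill_diagonal(true_L, np.abs(np.diag(true_L)) + 0.5)
L_hat = true_L + 0.3 * np.tril(rng.standard_normal((k, k)))
cov_hat = L_hat @ L_hat.T

eigmin = np.linalg.eigvalsh(cov_hat).min()  # never meaningfully negative
```

By contrast, estimating each covariance entry from a different (pairwise-synchronized) sample and pasting the entries together can easily break positive semidefiniteness, which is the problem the factorization avoids.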
Joakim Westerlund (Lund University)
3 December 2015
Testing for Predictability in Panels with General Predictors
The difficulty of predicting returns has recently motivated researchers to start looking for tests that are either more powerful or robust to more features of the data. Unfortunately, the way these tests work typically involves trading robustness for power or vice versa. The current paper takes this as its starting point to develop a new panel-based approach to predictability that is both robust and powerful. Specifically, while the panel route to increased power is not new, the way in which the cross-sectional variation is exploited to also achieve robustness with respect to the predictor is. The result is two new tests that enable asymptotically standard normal and chi-squared inference across a wide range of empirically relevant scenarios in which the predictor may be stationary, unit root non-stationary, or anything in between. The type of cross-section dependence that can be permitted in the predictor is also very general, and can be weak, strong, or indeed anything in between. What is more, this generality comes at no cost in terms of complicated test construction. The new tests are therefore very user-friendly.
Ryo Okui (Kyoto University)
10 December 2015
Asymptotic Inference for Dynamic Panel Estimators of Infinite Order Autoregressive Processes
In this paper we consider the estimation of a dynamic panel autoregressive (AR) process of possibly infinite order in the presence of individual effects. We employ double asymptotics under which both the cross-sectional sample size and the length of time series tend to infinity and utilize the sieve AR approximation with its lag order increasing with the sample size. We establish the consistency and asymptotic normality of the fixed effects estimator and propose bias-corrected fixed effects estimator based on theoretical asymptotic bias term. The properties of the generalized method of moments estimator and Hayakawa’s instrumental variables estimator are also examined. Monte Carlo simulations demonstrate the usefulness of bias correction. As an illustration, proposed methods are applied to dynamic panel estimation of the law of one price deviations among US cities.