Seminars in Econometrics 2016

16 February 2016 

The Practice of FX Spot Trading and Competition amongst Liquidity Providers

Roel Oomen (Deutsche Bank)

No abstract

25 February 2016

Bias-corrected Common Correlated Effects Pooled estimation in homogeneous dynamic panels

Gerdie Everaert (Ghent University)

Abstract

This paper extends the Common Correlated Effects Pooled (CCEP) estimator proposed by Pesaran (2006) to dynamic homogeneous models. For static panels, this estimator is consistent as the number of cross-sections (N) goes to infinity, irrespective of the time series dimension (T). However, it suffers from a large bias in dynamic models when T is fixed (Everaert and De Groote, 2016). We develop a bias-corrected CCEP estimator based on an asymptotic bias expression that is valid for a multi-factor error structure, provided that a sufficient number of cross-sectional averages, and lags thereof, are added to the model.
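For reference (standard notation, not taken from the paper), the static CCEP estimator of Pesaran (2006) is pooled least squares after partialling out cross-sectional averages:

    \hat{\beta}_{CCEP} = \Bigl( \sum_{i=1}^{N} X_i' \bar{M} X_i \Bigr)^{-1} \sum_{i=1}^{N} X_i' \bar{M} y_i,
    \qquad
    \bar{M} = I_T - \bar{H} \bigl( \bar{H}' \bar{H} \bigr)^{-} \bar{H}',

where \bar{H} collects the observed common variables and the cross-sectional averages of the dependent variable and the regressors (and, in the dynamic setting considered here, lags thereof).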

We show that the resulting CCEPbc estimator is consistent as N tends to infinity, whether T is fixed or grows large, and derive its limiting distribution. Monte Carlo experiments show that our bias correction performs very well. It is nearly unbiased, even when T and/or N are small, and hence offers a strong improvement over the severely biased CCEP estimator. CCEPbc is also found to be superior to alternative bias correction methods available in the literature in terms of bias, variance and inference.

31 March 2016

Robust and nonparametric detection of change-points in time series using U-statistics and U-quantiles

Roland Fried (TU Dortmund)

Abstract

Tests for detecting change-points in weakly dependent (more precisely: near epoch dependent) time series are studied. As examples, we will be able to treat most standard models of time series analysis, such as ARMA and GARCH processes.

The presentation will place particular emphasis on the basic problem of testing for an abrupt shift in location, but other questions, such as a change in variability, will also be considered. The popular CUSUM test is not robust to outliers and can be improved upon in the case of non-normal data, particularly under heavy tails. The CUSUM test can be modified using the two-sample Hodges-Lehmann estimator, which is the median of all pairwise differences between the samples. It is highly robust and has high efficiency under normality.
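As a minimal numerical sketch (my own simplification; it is not the authors' test and omits the critical values needed for formal inference), a CUSUM-type scan can measure the shift at every candidate split by the two-sample Hodges-Lehmann estimator:

    import numpy as np

    def hodges_lehmann_2sample(x, y):
        # Median of all pairwise differences y_j - x_i between the two segments.
        return np.median(np.subtract.outer(y, x))

    def hl_changepoint_scan(series, min_seg=5):
        # Scan all admissible split points and return the one with the largest
        # CUSUM-style weighted absolute Hodges-Lehmann shift.
        n = len(series)
        best_k, best_stat = None, -np.inf
        for k in range(min_seg, n - min_seg + 1):
            shift = hodges_lehmann_2sample(series[:k], series[k:])
            stat = np.sqrt(k * (n - k) / n) * abs(shift)
            if stat > best_stat:
                best_k, best_stat = k, stat
        return best_k, best_stat

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.standard_t(3, size=100), 1.5 + rng.standard_t(3, size=100)])
    print(hl_changepoint_scan(x))   # the detected split should be close to 100

On heavy-tailed data such as the t(3) sample above, the median of pairwise differences is far less sensitive to individual outliers than the difference in means used by the classical CUSUM.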

As for a related test based on the two-sample Wilcoxon statistic, the asymptotics of the Hodges-Lehmann change-point test can be established under general conditions without any moment assumptions. Both tests offer similar power against shifts in the center of the data, but the test based on the Hodges-Lehmann estimator performs better if a shift occurs far from the center. MOSUM-type tests restrict attention to data in two subsequent moving time windows.

This may overcome possible masking effects due to several shifts in different directions. The talk investigates CUSUM- and MOSUM-type tests based on the two-sample Wilcoxon statistic or the Hodges-Lehmann estimator, analyzing their asymptotic properties and comparing their finite-sample performance via simulation experiments.

(Joint work with Herold Dehling and Martin Wendler)

7 April 2016

Testing no cointegration in large VARs

Alexei Onatski (University of Cambridge)

Abstract

We study the asymptotic behavior of Johansen's (1988, 1991) likelihood ratio test for no cointegration when the number of observations and the dimensionality of the vector autoregression diverge to infinity simultaneously and proportionally. We find that the empirical distribution of the squared canonical correlations that the test is based on converges to the so-called Wachter distribution.
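For reference (standard notation, not taken from the paper), Johansen's likelihood ratio (trace) statistic for the null of no cointegration in an n-dimensional system is

    LR = -T \sum_{i=1}^{n} \log \bigl( 1 - \hat{\lambda}_i \bigr),

where \hat{\lambda}_1 \geq \dots \geq \hat{\lambda}_n are the squared sample canonical correlations between the differenced series and the (suitably adjusted) lagged levels; it is the empirical distribution of these \hat{\lambda}_i that converges to the Wachter distribution in the joint limit studied here.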

This finding provides a theoretical explanation for the observed tendency of the test to find "spurious cointegration" in the data. It also sheds light on the workings and limitations of the Bartlett correction approach to the over-rejection problem. We propose a simple graphical device, similar to the scree plot, as a quick check of the null hypothesis of no cointegration in high-dimensional VARs.

14 April 2016

In Search of a Nominal Anchor: What Drives Long-Term Inflation Expectations?

Emanuel Moench (Deutsche Bundesbank)

Abstract

According to both central bankers and economic theory, anchored inflation expectations are key to successful monetary policymaking. Yet, we know very little about the determinants of those expectations. While policymakers may take some comfort in the stability of long-run inflation expectations, the latter is not an inherent feature of the economy. What does it take for expectations to become unanchored?

We explore a theory of expectations formation that can produce episodes of unanchoring. Its key feature is state-dependency in the sensitivity of long-run inflation expectations to short-run inflation surprises. Price-setting agents act as econometricians trying to learn about average long-run inflation. They set prices according to their views about future inflation, which hence feed back into actual inflation. When expectations are anchored, agents believe there is a constant long-run inflation rate, which they try to learn about. Hence, their estimates of long-run inflation move slowly, as they keep adding observations to the sample they consider.

However, in the spirit of Marcet and Nicolini (2003), a long enough sequence of inflation surprises leads agents to doubt the constancy of long-run inflation and to switch to putting more weight on recent developments. As a result, long-run inflation expectations become unanchored and start to react more strongly to short-run inflation surprises. Shifts in agents’ views about long-run inflation feed into their price-setting decisions, imparting a drift to actual inflation. Hence, actual inflation can show persistent swings away from its long-run mean. We estimate the model using actual inflation data and only short-run inflation forecasts from surveys. The estimated model produces long-run forecasts that track survey measures extremely well. The estimated model has several uses:

  1. It can tell a story of how inflation expectations got unhinged in the 1970s; it can also be used to construct a counterfactual history of inflation under anchored long-run expectations.
  2. At any given point in time, it can be used to compute the probability of inflation or deflation scares.
  3. If embedded into an environment with explicit monetary policy, it can also be used to study the role of policy in shaping the expectations formation mechanism.
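A stylized numerical sketch of the gain-switching mechanism described above (the parameter values and updating rule are mine, not the authors' estimated model):

    import numpy as np

    def update_beliefs(inflation, threshold=1.0, constant_gain=0.1):
        # Agents track a long-run inflation belief. While surprises stay small,
        # the belief is updated with a decreasing gain (a recursive sample mean),
        # so it moves slowly; a large surprise triggers a switch to a constant
        # gain that weights recent data heavily, i.e. expectations unanchor.
        belief, t_anchored = inflation[0], 1
        beliefs = [belief]
        for pi in inflation[1:]:
            surprise = pi - belief
            if abs(surprise) > threshold:
                gain = constant_gain
            else:
                t_anchored += 1
                gain = 1.0 / t_anchored
            belief += gain * surprise
            beliefs.append(belief)
        return np.array(beliefs)

    rng = np.random.default_rng(1)
    inflation = 2.0 + rng.normal(0, 0.4, 300) + np.where(np.arange(300) > 200, 2.0, 0.0)
    print(update_beliefs(inflation)[[150, 299]])   # belief before and after the surprise period

With small surprises the 1/t gain keeps shrinking and the belief barely moves; once surprises persistently exceed the threshold, the constant gain takes over and the belief chases recent inflation, mimicking an unanchoring episode.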

Co-authors: Carlos Carvalho, Stefano Eusepi and Bruce Preston

12 May 2016

Detecting anomalous data cells

Peter Rousseeuw (KU Leuven)

Abstract

A multivariate dataset consists of n cases in d dimensions, and is often stored in an n by d data matrix.

It is well-known that real data may contain outliers. Depending on the circumstances, outliers may be (a) undesirable errors which can adversely affect the data analysis, or (b) valuable nuggets of unexpected information. In statistics and data analysis the word outlier usually refers to a row of the data matrix, and the methods to detect such outliers only work when at most 50% of the rows are contaminated.

But often only one or a few cell values in a row are outlying, and they may not be found by looking at each variable (column) separately. We propose the first method to detect cellwise outliers in the data that takes the correlations between the variables into account. It has no restriction on the number of contaminated rows and can deal with high dimensions. Other advantages are that it provides estimates of the ‘expected’ values of the outlying cells, while imputing missing values at the same time. We illustrate the method on several real data sets, where it uncovers more structure than found by purely columnwise or purely rowwise methods.
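A deliberately simplified illustration of the cellwise idea (my own sketch; it is not the robust procedure presented in the talk, since it predicts each column from the others by ordinary least squares):

    import numpy as np

    def flag_cells(X, cutoff=3.0):
        # For each column, predict the cells from the other columns by least
        # squares and flag cells whose residuals are large relative to a robust
        # (MAD-based) scale, so correlations between variables are used.
        n, d = X.shape
        flags = np.zeros((n, d), dtype=bool)
        for j in range(d):
            A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ beta
            scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
            flags[:, j] = np.abs(resid) > cutoff * scale
        return flags

    rng = np.random.default_rng(2)
    cov = [[1.0, 0.8, 0.5], [0.8, 1.0, 0.6], [0.5, 0.6, 1.0]]
    X = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=200)
    X[0, 1] += 3.0                     # a cell that is moderate for its own column
    print(np.argwhere(flag_cells(X)))  # but should be flagged given the other variables

The planted cell is far more clearly anomalous relative to what the correlated columns predict than relative to its own column in isolation, which is exactly the kind of outlier that purely columnwise screening misses.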

This is joint work with Wannes Van den Bossche of the KU Leuven.

19 May 2016

Conditional Inference with a Functional Nuisance Parameter

Isaiah Andrews (Harvard University)

Abstract

This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite-dimensional nuisance parameter.

We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics.

We apply our approach to construct tests based on quasi-likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

26 May 2016

Deep Learning from Small Data

Max Welling (University of Amsterdam)

Abstract

Deep learning has become the dominant modeling paradigm in machine learning. It has been spectacularly successful in application areas ranging from speech recognition and image analysis to natural language processing and information retrieval. But a number of important challenges remain un(der)solved, such as data-efficient deep learning, energy-efficient deep learning and visualizing deep neural networks. In this talk I will address the problem of “data-efficient deep learning” through three distinct approaches:

  1. Combining generative probabilistic (graphical) models with deep learning using variational auto-encoders (w/ D. Kingma),
  2. Bayesian deep learning using variational approximations based on matrix-normal distributions on random matrices (w/ C. Louizos), and
  3. Exploiting symmetries using group-equivariant CNNs (w/ T. Cohen).

2 June 2016

Consistent estimation of optimized functions for the analysis of portfolio strategies

Diego Ronchetti (University of Groningen)

Abstract

This paper introduces a novel technique for the consistent estimation of models described by restrictions on optimized conditional moments of state and control variables. The method is nonparametric with respect to the dynamics of these variables, and does not require data on the moment optimizer. The technique is illustrated in a financial application: the estimation of portfolio weights and other properties of the unobservable self-financing strategy that best replicates target cash-flows in a Markovian setting. In addition, the paper discusses how the technique can be employed for the estimation of other optimized functions of state and control variables that are of interest in economic applications, such as maximized expected individual intertemporal utilities in microeconomic models.

9 August 2016 

Posterior Inference for Portfolio Weights

Christoph Frey (University of Konstanz)

Abstract

We investigate estimation uncertainty in portfolio weights through their posterior distributions in a Bayesian regression framework. While we derive analytical posterior results for shrinkage variants of the global minimum variance portfolio (GMVP), the main advantage of our novel approach is that we specify the prior directly on the optimal portfolio weights. This avoids estimating the moments of the asset return distribution and substantially reduces the dimensionality of the estimation problem. In a series of empirical experiments we explore the effect of estimation errors on the performance of the optimal portfolio and propose various practical trading strategies derived from the posterior distribution, which are highly beneficial to the investor. We further show how to incorporate economic views about asset returns in our framework as shrinkage targets and how to account for the investor’s uncertainty about these views through a hierarchical set-up.
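For reference (a textbook result, not specific to the paper), with return covariance matrix \Sigma and a vector of ones \iota, the GMVP weights are

    w_{GMVP} = \frac{\Sigma^{-1} \iota}{\iota' \Sigma^{-1} \iota}.

The classical plug-in approach replaces \Sigma by an estimate, which is where the estimation uncertainty studied here enters; the approach of the talk instead places the prior directly on the weight vector.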

11 August 2016

Refined exogeneity tests in linear dynamic panel data models

Milan Pleus (University of Amsterdam)

Abstract

Exogeneity tests are investigated in linear dynamic panel data models estimated by GMM. Because usually only internal instruments are exploited in that context, misclassification of explanatory variables either renders a specific subset of instruments invalid or yields inefficient estimates. Rather than testing all overidentifying restrictions by the Sargan-Hansen test, the focus is on subsets, using either the incremental Sargan-Hansen test or a Hausman test. Although it is known in the literature that the Sargan-Hansen test suffers when using many instruments, it is as yet unclear in what way the incremental test is affected. Therefore, test statistics are considered in which the number of employed instruments is deliberately restricted. Two possible refinements are proposed. The procedure of Hayakawa (2014), which forces a block-diagonal structure on the weighting matrix in order to reduce problems stemming from taking its inverse, is generalized to the incremental test, and a finite-sample corrected variance estimate for the vector of contrasts is derived, from which two new Hausman test statistics are constructed. Simulation is used to investigate finite-sample performance. One of the corrected Hausman test statistics and a specific implementation of the incremental Sargan-Hansen test, both using only the one-step residuals calculated under the null hypothesis, are found to perform best in terms of size.
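As a reminder of the basic form being refined here (standard notation, abstracting from the corrections developed in the paper): with J denoting the Sargan-Hansen overidentification statistic, the incremental statistic contrasts the full instrument set with the maintained subset,

    J_{inc} = J_{full} - J_{sub} \;\xrightarrow{d}\; \chi^{2}_{q},

where q is the number of instruments whose validity is being tested, under standard regularity and identification conditions.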

18 August 2016

Three-valued simple games and applications to minimum coloring problems

Marieke Musegaas (Tilburg University)

Abstract

We introduce the model of three-valued simple games as a natural extension of simple games. We analyze the core and the Shapley value of three-valued simple games. Using the concept of vital players as an extension of veto players, we construct the vital core and show that it is a subset of the core. The Shapley value is characterized on the class of all three-valued simple games. As an application, we characterize the class of conflict graphs inducing simple or three-valued simple minimum coloring games. We provide an upper bound on the number of maximum cliques of conflict graphs inducing such games. Moreover, it is shown that, in the case of a perfect conflict graph, the core of an induced three-valued simple minimum coloring game equals the vital core.

1 September 2016

Significance Bands

Oscar Jorda (University of California, Davis)

Abstract

This paper shows that a simple approximate 95% point-wise significance band for the response of an outcome variable to an intervention is given by an interval centered around zero defined by plus or minus two times the square root of n times the ratio of the conditional variance of the outcome variable to the conditional variance of the intervention variable. The paper shows how to calculate significance bands more generally, both analytically and by simulation, in the context of impulse responses calculated by local projections or using inverse probability weighting methods.  
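One possible reading of this rule of thumb, under standard OLS logic for a null of no response (my gloss, not necessarily the paper's exact statement): the band is centered at zero with half-width

    2 \sqrt{ \widehat{\sigma}^{2}_{y} \,/\, \bigl( n \, \widehat{\sigma}^{2}_{x} \bigr) } \;=\; \frac{2}{\sqrt{n}} \, \frac{\widehat{\sigma}_{y}}{\widehat{\sigma}_{x}},

where \widehat{\sigma}^{2}_{y} and \widehat{\sigma}^{2}_{x} are the conditional variances of the outcome and of the intervention variable, so that the band shrinks with the sample size n.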

Co-author: Guido Kuersteiner

15 September 2016

Reputation Management and Assessment in the Lab

Bauke Visser (Erasmus School of Economics)

Abstract

In a ‘reputation game,’ reputation-concerned agents use decisions and accompanying statements to influence assessments of their competence, and evaluators take such attempts into account when assessing them. We test the theoretical implications in the lab by comparing treatments with and without reputation concerns, and with and without statements. Reputation concerns make statements less informative. Evaluators assess agents quite well. Reputation concerns make assessments less responsive to decisions and statements, but evaluators overreact to infrequent statements and are too tough on agents if they only observe decisions. Contrary to theory, agents distort the decision less when statements accompany decisions.

Co-author: Sander Renes

22 September 2016

Model-based Analysis of Purchase Behavior in Large Assortments

Bruno Jacobs (Erasmus School of Economics)

Abstract

An accurate prediction of what a customer will purchase next is of paramount importance to successful online retailing. In practice, customer purchase history data are readily available to make such predictions, sometimes complemented with customer characteristics. Given the large product assortments maintained by online retailers, scalability of the prediction method is just as important as its accuracy. We study two classes of models that use such data to predict what a customer will buy next: a novel approach that uses latent Dirichlet allocation (LDA), and mixtures of Dirichlet-Multinomials (MDM).

A key benefit of a model-based approach is the potential to accommodate observed customer heterogeneity through the inclusion of predictor variables. We show that LDA can be extended in this direction while retaining its scalability. We apply the models to purchase data from an online retailer and contrast their predictive performance with that of a collaborative filter and a discrete choice model. Both LDA and MDM outperform the other methods. Moreover, LDA attains performance similar to that of MDM while being far more scalable, rendering it a promising approach to purchase prediction in large product assortments.
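As a rough sketch of how the plain-LDA component could be set up (my own illustration on simulated counts; the model discussed in the talk additionally incorporates customer covariates, which is not done here):

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    n_customers, n_products = 500, 200
    X = rng.poisson(0.05, size=(n_customers, n_products))      # sparse purchase-count matrix

    lda = LatentDirichletAllocation(n_components=10, random_state=0)
    theta = lda.fit_transform(X)                                # customer-level mixtures over latent motivations
    phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # motivation-level product probabilities

    scores = theta @ phi                                        # predicted product probabilities per customer
    print(np.argsort(scores[0])[-5:])                           # top-5 predicted products for customer 0

Each customer's purchase history acts as a "document" over the assortment; the fitted mixtures theta and product probabilities phi combine into a prediction for every product, and the computation scales linearly in the number of customers and products.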

20 October 2016

Structural modeling of sales and prices in the 2014 new car market in France

Matthieu de Lapparent (Ecole Polytechnique Federale de Lausanne)

Abstract

This presentation develops a structural nonlinear equilibrium model to characterize sales and prices in the 2014 new car market in France. As is now popular in empirical industrial organization (e.g. Einav and Levin, 2010), consumers are assumed to maximize utility when choosing among a discrete set of products (including an outside good). Aggregate demands and market shares are derived from this microeconomic framework. Berry et al. (1995, 2004) detailed how the workhorse mixed logit discrete choice model can be implemented when only market shares are available. Fosgerau and de Palma (2016) recently proposed a generalized framework for demand analysis using market shares.

Several authors (e.g. Zenetti and Otter, 2014) have pointed out that it is more informative to use aggregate finite purchase counts directly when they are available, as is the case here. Their probability distribution is characterized by parameters that model the market shares of the different types of cars. Market shares are assumed to take the form of mixtures of error component logit probabilities. The error component structure captures unobserved car attributes. It offers enough flexibility and practicability to generate realistic correlation patterns across choice alternatives.

On the supply side, multi-product car manufacturers are assumed to compete in prices à la Nash-Bertrand. Optimal prices are characterized as nonlinear functions of market shares and marginal production costs. Up to additional, yet standard, assumptions about the distribution of unobservables on the supply side (the marginal cost equation) and about how they are correlated with the unobservable car attributes that affect demand, the supply side can conveniently be combined with aggregate purchase counts to derive the analytical formulation of the joint distribution of sales and prices.
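A standard textbook rendering of these two building blocks (Berry et al., 1995; my notation, not necessarily the paper's exact specification) is

    s_j(p) = \int \frac{\exp(\delta_j + \mu_{ij})}{1 + \sum_{k} \exp(\delta_k + \mu_{ik})} \, dF(\mu_i),
    \qquad \delta_j = x_j' \beta - \alpha p_j + \xi_j,

    p = mc + \Delta(p)^{-1} s(p),

where the error components \mu_{ij} generate correlation across alternatives, \xi_j are unobserved car attributes, and \Delta(p) collects the own- and cross-price share derivatives within each manufacturer's product portfolio.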

In contrast to demand-only analysis, where price endogeneity is dealt with through a purely descriptive auxiliary regression model (or other IV techniques), the proposed econometric specification explicitly models the sources of price endogeneity in equilibrium formation, namely simultaneity and unobserved correlation in the error terms.

The approach is applied to data for the new car market in France. The market consists of about 1,500 car alternatives produced by around 30 manufacturers in 2014. Sensitivity to the definition of the no-purchase option is discussed. Heterogeneity of consumers is modeled by accounting for the income distribution in the population.

Bayesian inference is carried out to estimate posterior distributions of the parameters. The choice of prior distributions is discussed. The NUTS sampler (Hoffman and Gelman, 2014) is used in a Hamiltonian MCMC setting (Neal, 2011) with an additional data augmentation step (Albert and Chib, 1993).
The estimates are used to derive distributions of willingness-to-pay for car attributes, own and cross-price elasticities, and price-cost margins. Some merger simulations are also proposed.

27 October 2016

Focused Information Criterion and Model Averaging for Large Panels with a Multifactor Error Structure

Chu-An Liu (Academia Sinica)

Abstract

This paper considers model selection and model averaging in panel data models with a multifactor error structure. We investigate the limiting distributions of the common correlated effects estimators (Pesaran, 2006) in a local asymptotic framework and show that the trade-off between bias and variance remains in the asymptotic theory. In addition, we find that adding more regressors could have positive or negative effects on estimation variance. We then propose a focused information criterion and a plug-in averaging estimator for large heterogeneous panels.

The novel feature of the proposed method is that it aims to minimize the sample analog of the asymptotic mean squared error and applies irrespective of whether the rank condition holds. Monte Carlo simulations show that both the proposed selection and averaging methods generally achieve lower expected squared error than other methods.

3 November 2016

Bayesian Nonparametric Calibration and Combination of Predictive Distributions

Francesco Ravazzolo (Free University of Bozen/Bolzano)

Abstract

We introduce a Bayesian approach to predictive density calibration and combination that accounts for parameter uncertainty and model set incompleteness through the use of random calibration functionals and random combination weights.  Building on the work of Ranjan and Gneiting (2010, 2013), we use infinite beta mixtures for the calibration.  The proposed Bayesian nonparametric approach takes advantage of the flexibility of Dirichlet process mixtures to achieve any continuous deformation of linearly combined predictive distributions. 
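For orientation (my notation), the beta-transformed linear pool of Ranjan and Gneiting, which the talk generalizes to infinite beta mixtures, calibrates a linear combination of K predictive cdfs F_1, ..., F_K as

    G(y) = B_{\alpha, \beta} \Bigl( \sum_{k=1}^{K} w_k F_k(y) \Bigr),

where B_{\alpha, \beta} is the Beta cdf and the weights w_k are nonnegative and sum to one; replacing the single Beta cdf by a (Dirichlet process) mixture of Beta cdfs yields the flexible deformations described above.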

The inference procedure is based on Gibbs sampling and allows accounting for uncertainty in the number of mixture components, the mixture weights, and the calibration parameters. Weak posterior consistency of the Bayesian nonparametric calibration is established under suitable conditions on the unknown true density. We study the methodology in simulation examples with fat tails and multimodal densities and apply it to density forecasts of daily S&P returns and of the daily maximum wind speed at Frankfurt airport.


10 November 2016

Time Series Analysis in Psychology

Casper Albers (University of Groningen)

Abstract

The analysis of time series data has been common in many areas of research for decades, but for psychological research the large-scale collection of longitudinal data is a recent development. Through technological advances, we are now able to measure psychological variables such as emotions repeatedly over time, a process called ecological momentary assessment (EMA). Due to various characteristics of these EMA data, the application of standard time series models to them is not straightforward.

EMA data are usually collected at relatively few, say 25-50, time points. Common estimators of autoregression parameters have been developed on the basis of asymptotic arguments, which clearly do not work for such short series. In the first half of my presentation, I will focus on the consequences of this for the estimation of the model parameters, both for single-case studies and for multilevel/hierarchical designs.
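A small simulation (my own illustration, not taken from the talk) of why asymptotically justified estimators are problematic at such lengths: the least-squares estimate of an AR(1) coefficient is markedly biased towards zero when T is around 30.

    import numpy as np

    def mean_ar1_estimate(phi=0.5, T=30, reps=5000, seed=0):
        # Simulate many short AR(1) series and average the least-squares estimate
        # of the autoregression parameter (demeaning plays the role of an intercept).
        rng = np.random.default_rng(seed)
        estimates = np.empty(reps)
        for r in range(reps):
            y = np.zeros(T)
            for t in range(1, T):
                y[t] = phi * y[t - 1] + rng.standard_normal()
            y = y - y.mean()
            estimates[r] = np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)
        return estimates.mean()

    print(mean_ar1_estimate())   # noticeably below the true value of 0.5 for T = 30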


In the second half of the talk, I will discuss the Bayesian dynamic model (BDM). This Bayesian extension of state space models offers a versatile class of modelling approaches very useful for typical psychological data. I will outline the benefits of this class of models and showcase it using two examples from psychological practice. The first example stems from emotion research, in the second we compare different treatments for patients suffering from anxiety attacks.

17 November 2016

Title: TBA 

Joris Pinkse (Penn State University)

No abstract

1 December 2016

Title: TBA

Jose Montiel Olea (New York University)

No abstract

8 December 2016

Title: TBA

Yanqin Fan (University of Washington)

No abstract
