Schedule fall 2016
Oscar Jorda (University of California, Davis)
1 September 2016
This paper shows that a simple approximate 95% point-wise significance band for the response of an outcome variable to an intervention is an interval centered at zero, with half-width equal to two times the square root of the ratio of the conditional variance of the outcome variable to n times the conditional variance of the intervention variable. The paper shows how to calculate significance bands more generally, both analytically and by simulation, for impulse responses calculated by local projections or by inverse probability weighting methods.
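As a minimal numerical sketch (not code from the paper), reading the band as plus or minus two standard errors of the OLS slope of the outcome on the intervention under the null of no response; all variable names and the simulated data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
s = rng.normal(size=n)            # intervention variable
y = rng.normal(size=n)            # outcome under the null of no response

# Approximate 95% point-wise significance band under the null:
# +/- 2 * sqrt(Var(y) / (n * Var(s))), i.e. two standard errors of the
# OLS slope of y on s when the true response is zero.
half_width = 2.0 * np.sqrt(y.var(ddof=1) / (n * s.var(ddof=1)))
band = (-half_width, half_width)

# An estimated response outside the band is significant at roughly the 5% level.
slope = np.cov(s, y, ddof=1)[0, 1] / s.var(ddof=1)
```

Under the null, the estimated slope should fall inside the band about 95% of the time.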
Co-author: Guido Kuersteiner
Bauke Visser (Erasmus School of Economics)
15 September 2016
Reputation Management and Assessment in the Lab
In a ‘reputation game,’ reputation-concerned agents use decisions and accompanying statements to influence assessments of their competence, and evaluators take such attempts into account when assessing them. We test the theoretical implications in the lab by comparing treatments with and without reputation concerns, and with and without statements. Reputation concerns make statements less informative. Evaluators assess agents quite well. Reputation concerns make assessments less responsive to decisions and statements, but evaluators overly react to infrequent statements and are too tough on agents if they only observe decisions. Contrary to theory, if statements accompany decisions, agents distort the decision less.
Co-author: Sander Renes
Bruno Jacobs (Erasmus School of Economics)
22 September 2016
Model-based Analysis of Purchase Behavior in Large Assortments
An accurate prediction of what a customer will purchase next is of paramount importance to successful online retailing. In practice, customer purchase history data is readily available to make such predictions, sometimes complemented with customer characteristics. Given the large product assortments maintained by online retailers, scalability of the prediction method is just as important as its accuracy. We study two classes of models that use such data to predict what a customer will buy next, i.e., a novel approach that uses latent Dirichlet allocation (LDA), and mixtures of Dirichlet-Multinomials (MDM). A key benefit of a model-based approach is the potential to accommodate observed customer heterogeneity through the inclusion of predictor variables. We show that LDA can be extended in this direction while retaining its scalability. We apply the models to purchase data from an online retailer and contrast their predictive performance with that of a collaborative filter and a discrete choice model. Both LDA and MDM outperform the other methods. Moreover, LDA attains performance similar to that of MDM while being far more scalable, rendering it a promising approach to purchase prediction in large product assortments.
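To make the LDA idea concrete, here is a hypothetical sketch using scikit-learn (the paper's own implementation and data may differ): customers play the role of documents and products the role of words, so a customer's latent topic mix multiplied by the topic-level product distributions yields next-purchase probabilities.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)
n_customers, n_products = 200, 50

# Toy purchase-count matrix: row = a customer's purchase history,
# column = a product; entries are purchase counts.
counts = rng.poisson(0.3, size=(n_customers, n_products))

# Fit LDA with a handful of latent "purchase motivations" (topics).
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)   # customer-level topic mixtures
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Predicted distribution over the next purchase for each customer.
next_purchase_probs = theta @ phi
```

Scalability comes from the fact that the fitted representation grows with the number of topics, not with the full customer-by-product matrix.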
Matthieu de Lapparent (Ecole Polytechnique Federale de Lausanne)
20 October 2016
Structural modeling of sales and prices in the 2014 new car market in France
This presentation develops a structural nonlinear equilibrium model to characterize sales and prices in the 2014 new car market in France. As is now popular in empirical industrial organization (e.g., Einav and Levin, 2010), consumers are assumed to maximize utility when choosing among a discrete set of products (including an outside good). Aggregate demands and market shares are derived from this microeconomic framework. Berry et al. (1995, 2004) detailed how the workhorse mixed logit discrete choice model can be implemented when only market shares are available, and Fosgerau and de Palma (2016) recently proposed a generalized framework for demand analysis using market shares.
Several authors (e.g., Zenetti and Otter, 2014) have pointed out that it is more informative to work directly with aggregate finite purchase counts when they are available, as is the case here. Their probability distribution is characterized by parameters that model the market shares of the different types of cars. Market shares are assumed to take the form of mixtures of error-component logit probabilities. The error-component structure captures unobserved car attributes and offers enough flexibility and tractability to generate realistic correlation patterns across choice alternatives.
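A toy simulation of error-component logit market shares, with made-up mean utilities and a hypothetical two-segment error component (not the paper's actual specification): the shares are logit probabilities averaged over draws of the common error components.

```python
import numpy as np

rng = np.random.default_rng(2)
J, R = 5, 2000          # inside goods and simulation draws

# Mean utilities for the J inside goods; outside good normalized to 0.
delta = rng.normal(size=J)
# Loading on an error component shared within a (hypothetical) car segment.
sigma = 0.8
group = np.array([0, 0, 1, 1, 1])   # illustrative segment membership

# Simulate the error components and average logit probabilities over draws.
nu = rng.normal(size=(R, 2))                   # one draw per segment
v = delta + sigma * nu[:, group]               # (R, J) utilities
expv = np.exp(v)
shares = (expv / (1.0 + expv.sum(axis=1, keepdims=True))).mean(axis=0)
outside = 1.0 - shares.sum()
```

The shared draw `nu` induces positive correlation in choices within a segment, which is what breaks the restrictive substitution patterns of plain logit.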
On the supply side, multi-product car manufacturers are assumed to compete in prices à la Nash-Bertrand. Optimal prices are characterized as nonlinear functions of market shares and marginal production costs. Under additional, yet standard, assumptions about the distribution of unobservables on the supply side (the marginal cost equation) and their correlation with the unobserved car attributes that affect demand, the supply side can conveniently be combined with aggregate purchase counts to derive the analytical form of the joint distribution of sales and prices.
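For intuition on the pricing first-order conditions, a stylized sketch with plain (non-mixed) logit demand and made-up shares, price coefficient, and ownership structure; the paper's error-component demand changes the share Jacobian but not the structure of the markup formula.

```python
import numpy as np

# Logit demand with price coefficient alpha; shares s; ownership matrix O
# with O[j, k] = 1 if products j and k belong to the same manufacturer.
alpha = 2.0
s = np.array([0.15, 0.10, 0.20])          # illustrative market shares
O = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])                 # firm A owns goods 1-2, firm B good 3

# Jacobian of shares with respect to prices for plain logit demand:
# ds_j/dp_j = -alpha*s_j*(1-s_j), ds_k/dp_j = alpha*s_j*s_k.
ds_dp = alpha * (np.outer(s, s) - np.diag(s))

# Bertrand-Nash first-order conditions: s + (O * ds_dp) @ (p - c) = 0,
# so the equilibrium markup vector is p - c = -inv(O * ds_dp) @ s.
markup = -np.linalg.solve(O * ds_dp, s)
```

For the single-product firm this collapses to the familiar logit markup 1 / (alpha * (1 - s)).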
In contrast to demand-only analysis, where price endogeneity is handled by a purely descriptive auxiliary regression model (or other IV techniques), the proposed econometric specification explicitly models the sources of price endogeneity in equilibrium formation, namely simultaneity and correlation among the unobserved error terms.
The approach is applied to data on the new car market in France, which in 2014 comprised about 1,500 car alternatives produced by around 30 manufacturers. Sensitivity to the definition of the no-purchase option is discussed. Consumer heterogeneity is modeled by accounting for the income distribution in the population.
Bayesian inference is carried out to estimate posterior distributions of the parameters. The choice of prior distributions is discussed. The NUTS sampler (Hoffman and Gelman, 2014) is used in a Hamiltonian Monte Carlo setting (Neal, 2011) with an additional data augmentation step (Albert and Chib, 1993).
The estimates are used to derive distributions of willingness-to-pay for car attributes, own and cross-price elasticities, and price-cost margins. Some merger simulations are also proposed.
Chu-An Liu (Academia Sinica)
27 October 2016
Focused Information Criterion and Model Averaging for Large Panels with a Multifactor Error Structure
This paper considers model selection and model averaging in panel data models with a multifactor error structure. We investigate the limiting distributions of the common correlated effects estimators (Pesaran, 2006) in a local asymptotic framework and show that the trade-off between bias and variance remains in the asymptotic theory. In addition, we find that adding more regressors can have positive or negative effects on estimation variance. We then propose a focused information criterion and a plug-in averaging estimator for large heterogeneous panels. The novel feature of the proposed method is that it aims to minimize the sample analog of the asymptotic mean squared error and applies irrespective of whether the rank condition holds. Monte Carlo simulations show that both the proposed selection and averaging methods generally achieve lower expected squared error than other methods.
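A stylized cross-section illustration of the bias-variance trade-off behind plug-in averaging (much simpler than the paper's panel setting with factor errors, and the weight rule below is an illustrative shrinkage formula, not the proposed estimator): a small model omitting a regressor has lower variance but biased focus estimates, and the averaging weight trades the two off using sample estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)      # correlated "extra" regressor
y = 1.0 + 0.8 * x1 + 0.2 * x2 + rng.normal(size=n)

# Focus parameter: the coefficient on x1.
X_full = np.column_stack([np.ones(n), x1, x2])
X_small = np.column_stack([np.ones(n), x1])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
beta_small, *_ = np.linalg.lstsq(X_small, y, rcond=None)
mu_full, mu_small = beta_full[1], beta_small[1]

# Plug-in weight minimizing an estimated MSE: the small model saves
# variance (v_full - v_small) but incurs estimated squared bias b2.
resid = y - X_full @ beta_full
s2 = resid @ resid / (n - 3)
v_full = s2 * np.linalg.inv(X_full.T @ X_full)[1, 1]
v_small = s2 * np.linalg.inv(X_small.T @ X_small)[1, 1]
b2 = max((mu_small - mu_full) ** 2 - (v_full - v_small), 0.0)
w_small = (v_full - v_small) / (v_full - v_small + b2)
mu_avg = w_small * mu_small + (1.0 - w_small) * mu_full
```

When the estimated bias is small relative to the variance saving, most weight goes to the restricted model, and vice versa.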
Francesco Ravazzolo (Free University of Bozen/Bolzano)
3 November 2016
Bayesian Nonparametric Calibration and Combination of Predictive Distributions
We introduce a Bayesian approach to predictive density calibration and combination that accounts for parameter uncertainty and model set incompleteness through the use of random calibration functionals and random combination weights. Building on the work of Ranjan and Gneiting (2010, 2013), we use infinite beta mixtures for the calibration. The proposed Bayesian nonparametric approach takes advantage of the flexibility of Dirichlet process mixtures to achieve any continuous deformation of linearly combined predictive distributions. The inference procedure is based on Gibbs sampling and allows accounting for uncertainty in the number of mixture components, mixture weights, and calibration parameters. The weak posterior consistency of the Bayesian nonparametric calibration is established under suitable conditions on the unknown true density. We study the methodology in simulation examples with fat tails and multimodal densities, and apply it to density forecasts of daily S&P returns and of daily maximum wind speed at Frankfurt airport.
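A minimal sketch of the beta-calibrated linear pool of Ranjan and Gneiting that the paper generalizes: the pooled predictive CDF is passed through a single beta CDF (the Bayesian nonparametric approach replaces this single beta with an infinite mixture of betas). The component forecasts, pooling weight, and beta parameters below are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import norm, beta

# Two component predictive distributions for an outcome y.
F1 = norm(loc=0.0, scale=1.0).cdf
F2 = norm(loc=0.5, scale=2.0).cdf

# Linear pool, then one beta calibration map applied to the pooled CDF.
w, a, b = 0.6, 1.5, 1.2

def calibrated_cdf(y):
    pooled = w * F1(y) + (1.0 - w) * F2(y)   # linearly combined CDF
    return beta(a, b).cdf(pooled)            # beta deformation of the pool

ys = np.linspace(-6, 8, 9)
vals = calibrated_cdf(ys)
```

Because a beta CDF is a continuous increasing map of [0, 1] onto itself, the calibrated function is again a valid CDF; mixing over infinitely many betas lets the deformation be arbitrarily flexible.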
Link to the paper: Cornell University Library
Casper Albers (University of Groningen)
10 November 2016
Time Series Analysis in Psychology
The analysis of time series data has been common in many areas of research for decades, but for psychological research the large-scale collection of longitudinal data is a recent development. Through technological advances, we are now able to measure psychological variables such as emotions repeatedly over time, a process called ecological momentary assessment (EMA). Owing to several distinctive features of EMA data, applying standard time series models to these data is not straightforward.
EMA data are usually collected at relatively few time points, say 25 to 50. Common estimators of autoregression parameters have been developed on the basis of asymptotic arguments, which clearly do not apply to such short series. In the first half of my presentation, I will focus on the consequences for parameter estimation, both in single-case studies and in multilevel/hierarchical designs.
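A quick simulation (illustrative, not taken from the talk) of why asymptotics mislead here: the OLS estimate of an AR(1) coefficient is noticeably biased downward at EMA-typical series lengths.

```python
import numpy as np

rng = np.random.default_rng(4)
T, phi, n_rep = 30, 0.5, 4000   # short, EMA-like series

est = np.empty(n_rep)
for r in range(n_rep):
    y = np.empty(T)
    y[0] = rng.normal() / np.sqrt(1 - phi ** 2)   # stationary start
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.normal()
    # OLS autoregression estimate: regress y_t on y_{t-1}.
    est[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

bias = est.mean() - phi   # clearly negative at T = 30
```

The average estimate falls well short of the true 0.5, roughly in line with the classic order-1/T bias of autoregression estimators.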
In the second half of the talk, I will discuss the Bayesian dynamic model (BDM). This Bayesian extension of state-space models offers a versatile class of modelling approaches that is very useful for typical psychological data. I will outline the benefits of this class of models and showcase it with two examples from psychological practice: the first stems from emotion research; in the second, we compare different treatments for patients suffering from anxiety attacks.
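A minimal state-space sketch in the spirit of such models: a local level model for a repeatedly measured mood score, filtered with a Kalman filter. A BDM would additionally place priors on the variances and update them sequentially; all numbers here are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 60
# Local level model: the latent mood level follows a random walk and the
# observed EMA score is that level plus measurement noise.
q, r = 0.1, 1.0                     # state and observation noise variances
level = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
y = level + rng.normal(scale=np.sqrt(r), size=T)

# Kalman filter for the local level model.
m, P = 0.0, 10.0                    # vague prior on the initial level
filtered = np.empty(T)
for t in range(T):
    P_pred = P + q                  # predict: random-walk state
    K = P_pred / (P_pred + r)       # Kalman gain
    m = m + K * (y[t] - m)          # update with the new observation
    P = (1 - K) * P_pred
    filtered[t] = m
```

The filtered path tracks the latent level much more closely than the raw, noisy observations do.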
Joris Pinkse (Penn State University)
17 November 2016
Jose Montiel Olea (New York University)
1 December 2016
Yanqin Fan (University of Washington)
8 December 2016