Seminars in Econometrics 2017

19 January 2017 **CANCELED**

The auto- and cross-distance correlation functions of a multivariate time series and their sample versions

Thomas Mikosch (University of Copenhagen)

Feuerverger (1993) and Székely, Rizzo and Bakirov (2007) introduced the notion of distance covariance/correlation as a measure of independence/dependence between two vectors of arbitrary dimension and provided limit theory for the sample versions based on an i.i.d. sequence. The main idea is to use characteristic functions to test for independence between vectors, exploiting the standard property that the characteristic function of two independent vectors factorizes. Distance covariance is a weighted version of the squared distance between the joint characteristic function of the vectors and the product of their marginal characteristic functions.
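For readers who want to experiment, the sample version with the standard Euclidean weight reduces to an average of products of double-centred pairwise distance matrices. The sketch below is a minimal NumPy illustration of that computation (not the authors' code; `x` and `y` are assumed to be (n, d) arrays of joint observations):

```python
import numpy as np

def _dcov2(x, y):
    # Squared sample distance covariance (Székely et al., 2007) via
    # double-centred pairwise Euclidean distance matrices.
    def centred(a):
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    return (centred(x) * centred(y)).mean()

def distance_correlation(x, y):
    # dCor^2 = dCov^2(X, Y) / sqrt(dVar(X) * dVar(Y)); zero if a series is constant.
    denom = np.sqrt(_dcov2(x, x) * _dcov2(y, y))
    return np.sqrt(max(_dcov2(x, y), 0.0) / denom) if denom > 0 else 0.0
```

The auto- and cross-distance correlation functions at lag h are then obtained by applying this to the pairs (X_t, X_{t+h}).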

Similar ideas have been used in the literature for various purposes: goodness-of-fit tests, change point detection, testing for independence of variables, etc.; see work by Meintanis, Hušková, and many others. In contrast to Székely et al., who use a weight function that is infinite on the axes, these authors choose probability density weights. Z. Zhou (2012) extended distance correlation to time series models for testing dependence/independence in a time series at a given lag, under the assumption of a "physical dependence measure". In our work we consider the distance covariance/correlation for general weight measures, finite or infinite on the axes or at the origin.

These include the choice of Székely et al., probability measures, and various Lévy measures. The sample versions of distance covariance/correlation are obtained by replacing the characteristic functions with their sample versions. We show consistency under ergodicity, and weak convergence of the scaled auto- and cross-distance covariance/correlation functions to a nonstandard limit distribution under strong mixing.

We also study the auto-distance correlation function of the residual process of an autoregressive process. The limit theory is distinct from the corresponding theory of an i.i.d. noise process. We illustrate the theory for simulated and real data examples.

16 March 2017

Efficient Estimation with a Finite Number of Simulation Draws per Observation

Kirill Evdokimov (Princeton University)

In microeconometric applications, simulation methods such as the Method of Simulated Moments (MSM) and Indirect Inference (II) typically provide consistent and asymptotically normal estimators when a finite number of simulation draws per observation is used. However, these estimators are inefficient, unless the number of simulation draws per observation is large (theoretically, infinite).

This paper argues that this inefficiency can be attributed to the standard estimators ignoring important information about the estimation problem. The paper proves that asymptotically efficient estimation is possible with as few as one simulation draw per observation, as long as the estimators make proper use of the available information. Moreover, such efficient estimators can be constructed as simple modifications of the standard MSM and II estimators with nearly no additional computational or programming burden.

In practice, the possibility of using just one simulation draw per observation could significantly reduce the estimation time for models in which evaluation at each simulation draw and parameter value is time-consuming. This in particular includes models that require numerical computation of an optimal choice, decision, or equilibrium for each simulation draw. Such models are widespread in empirical microeconomics, including industrial organization and labor economics.

To establish the properties of the new estimators, the paper develops an asymptotic theory of estimation and inference in (possibly non-smooth) moment condition models with a large number of moments. This asymptotic theory covers both the extremum and quasi-Bayesian estimators.

6 April 2017

A Uniform Vuong Test for Semi/Nonparametric Models

Zhipeng Liao (UCLA)

This paper proposes a new Vuong test for the statistical comparison of semi/nonparametric models based on a general quasi-likelihood ratio criterion. An important feature of the new test is its uniformly exact asymptotic size in the overlapping nonnested case, as well as in the easier nested and strictly nonnested cases.

The uniform size control is achieved without using pretesting, sample-splitting, or simulated critical values. We also show that the test has nontrivial power against all √n-local alternatives and against some local alternatives that converge to the null faster than √n. Finally, we provide a framework for conducting uniformly valid post-Vuong-test inference for model parameters.
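For orientation, the classical pointwise Vuong (1989) statistic that such uniform tests refine is simply a scaled mean of pointwise log-likelihood differences. A minimal sketch for the strictly nonnested case follows (this is not the paper's uniform test; `ll1` and `ll2` are assumed to be arrays of per-observation log-likelihoods from the two fitted models):

```python
import numpy as np

def vuong_statistic(ll1, ll2):
    # Classical Vuong statistic: sqrt(n) times the mean pointwise
    # log-likelihood difference, scaled by its sample standard deviation.
    # In the strictly nonnested case it is compared with standard normal
    # critical values; the overlapping case is what requires the
    # uniformity corrections developed in the paper.
    d = np.asarray(ll1) - np.asarray(ll2)
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
```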

The finite sample performance of the uniform test and that of the post Vuong test inference procedure are illustrated in a mean-regression example by Monte Carlo.

Co-author: Xiaoxia Shi

20 April 2017

Monetary Policy Uncertainty and Economic Fluctuations

Drew Creal (University of Chicago)

We investigate the relationship between uncertainty about monetary policy and its transmission mechanism, and economic fluctuations. We propose a new term structure model where the second moments of macroeconomic variables and yields can have a first-order effect on their dynamics.

The data favors a model with two unspanned volatility factors that capture uncertainty about monetary policy and the term premium. Uncertainty contributes negatively to economic activity.

Two dimensions of uncertainty react in opposite directions to a shock to the real economy, and the response of inflation to uncertainty shocks varies across historical episodes.

4 May 2017

Identifying Latent Grouped Structures in Nonlinear Panels

Liangjun Su (Singapore Management University)

We propose a procedure to identify latent group structures in nonlinear panel data models where some regression coefficients are heterogeneous across groups but homogeneous within a group, and where the number of groups and the group membership are unknown. To identify the group structures, we consider the order statistics of the preliminary unconstrained consistent estimates of the regression coefficients and translate the classification problem into a problem of break detection.

We then extend the sequential binary segmentation algorithm of Bai (1997) for break detection from the time series setup to the panel data framework. We demonstrate that our method identifies the true latent group structures with probability approaching one and that the post-classification estimators are oracle-efficient. In addition, a major advantage of our method is its ease of implementation in comparison with competing methods in the literature, which is especially desirable for nonlinear panel data models. To improve the finite sample performance of our method, we also consider an alternative version based on the spectral decomposition of a certain estimated matrix, linking our group identification problem to the community detection problem in the network literature.
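To fix ideas, the break-detection step can be caricatured as follows, assuming the preliminary coefficient estimates have already been sorted into a one-dimensional array `z` (a stylised sketch of sequential binary segmentation, not the authors' implementation; the stopping threshold `tol` would in practice come from a test or an information criterion):

```python
import numpy as np

def best_break(z):
    # Single break point minimising the two-segment sum of squared residuals.
    best_k, best_ssr = None, np.inf
    for k in range(1, len(z)):
        left, right = z[:k], z[k:]
        ssr = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k, best_ssr

def segment(z, tol):
    # Sequential binary segmentation: keep splitting while the SSR
    # reduction from the best break exceeds the threshold tol.
    total = ((z - z.mean()) ** 2).sum()
    k, ssr = best_break(z)
    if total - ssr < tol:
        return [z]
    return segment(z[:k], tol) + segment(z[k:], tol)
```

Each returned segment corresponds to one estimated latent group.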

Simulations show that our method has good finite sample performance. We apply our method to explore how individuals' portfolio choices respond to their financial status and other characteristics using Netherlands household panel data from 1993 to 2015, and find two latent groups.

30 May 2017

Fixed-Effect Regressions on Network Data

Martin Weidner (UCL)

This paper studies inference on fixed effects in a linear regression model estimated from network data. An important special case of our setup is the two-way regression model, which is a workhorse method in the analysis of matched data sets. Networks are typically quite sparse and it is difficult to see how the data carry information about certain parameters.

We derive bounds on the variance of the fixed-effect estimator that uncover the importance of the structure of the network. These bounds depend on the smallest non-zero eigenvalue of the (normalized) Laplacian of the network and on the degree structure of the network.

The Laplacian is a matrix that describes the network, and its smallest non-zero eigenvalue is a measure of connectivity, with smaller values indicating less-connected networks. These bounds yield conditions for consistent estimation and convergence rates, and allow us to evaluate the accuracy of first-order approximations to the variance of the fixed-effect estimator.

The bounds are also used to assess the bias and variance of estimators of moments of the fixed effects.

Co-author: Koen Jochmans

Tuesday 3 October 2017

Parallel Incremental Optimization Algorithm for Solving Partially Separable Problems in Machine Learning

Ilker Birbil (Sabanci University)

Consider a recommendation problem where multiple firms are willing to cooperate to improve their rating predictions. However, the firms insist on a machine learning approach which guarantees that their data remain on their own servers. To solve this problem, I will introduce our recently proposed approach HAMSI (Hessian Approximated Multiple Subsets Iteration).

HAMSI is a provably convergent, second-order incremental algorithm for solving large-scale partially separable optimization problems. The algorithm is based on a local quadratic approximation and hence allows incorporating curvature information to speed up convergence. HAMSI is inherently parallel and scales nicely with the number of processors. I will conclude my talk with several implementation details and our numerical results on a set of matrix factorization problems.
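The flavour of an incremental second-order scheme can be sketched as follows. This is a simplified serial caricature with a diagonal curvature approximation, not HAMSI itself; the function names, the step size, and the damping constant are illustrative assumptions, and the parallel shared-memory aspects are omitted:

```python
import numpy as np

def incremental_second_order_pass(theta, grad, hess_diag, subsets, step=0.5):
    # One pass over the data subsets: at each subset, take a damped
    # Newton-type step using that subset's gradient and a diagonal
    # curvature approximation (a local quadratic model of the partial sum).
    for s in subsets:
        g = grad(theta, s)
        h = hess_diag(theta, s) + 1e-8  # damping keeps the step well-defined
        theta = theta - step * g / h
    return theta
```

On a separable quadratic objective, repeated passes drive the objective down toward a neighbourhood of the minimiser.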

12 October 2017 **CANCELED**

Title: TBA

Arun Chandrasekhar (Stanford University)

No abstract

24 October 2017

Time Series Copulas for Heteroskedastic Data

Michael Smith (Melbourne Business School)

We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We suggest copulas for first-order Markov series, and then extend them to higher orders and to multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series.

Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value-at-risk forecasts. Finally, we outline an alternative approach to solving this problem based on extracting the "implicit" or inversion copula of existing parametric time series models.

27 October 2017

Quantile Spectral Analysis for Locally Stationary Time Series

Marc Hallin (Université libre de Bruxelles)

Classical spectral methods are subject to two fundamental limitations: they can only account for covariance-related serial dependencies, and they require second-order stationarity. Much attention has been devoted lately to quantile (copula-based) spectral methods that go beyond traditional covariance-based serial dependence features. At the same time, covariance-based methods relaxing stationarity into much weaker local stationarity conditions have been developed for a variety of time series models.

Here we combine these two approaches by proposing copula-based spectral methods for locally stationary processes. We introduce a time-varying version of the copula spectra recently proposed in the literature, along with a suitable local lag-window estimator. We propose a new definition of local strict stationarity that allows us to handle completely general nonlinear processes without any moment assumptions, thus accommodating our copula-based concepts and methods.
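The basic ingredient of such copula spectra is, roughly, a lag-h covariance of quantile-level indicators computed from the ranks of the series. The sketch below illustrates that ingredient only; the time-varying local weighting and lag-window smoothing are omitted, and the rank normalisation used is an assumption:

```python
import numpy as np

def copula_cross_cov(x, tau1, tau2, lag):
    # Lag-h copula (quantile) cross-covariance:
    #   cov( 1{F(X_t) <= tau1}, 1{F(X_{t+h}) <= tau2} ),
    # with the marginal cdf F replaced by empirical ranks of the series.
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n - 1)  # normalised ranks in [0, 1]
    a = (u[:n - lag] <= tau1).astype(float)
    b = (u[lag:] <= tau2).astype(float)
    return np.mean(a * b) - np.mean(a) * np.mean(b)
```

Collecting these quantities over lags and Fourier-transforming them is what yields a copula spectrum; the talk's locally stationary version lets them vary with time.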

We establish a central limit theorem for the new estimators, and illustrate the power of the proposed methodology by means of a simulation study. Moreover, real-data applications demonstrate that the new approach detects important variations in serial dependence structures both across time and across quantiles. Such variations remain completely undetected, and are actually undetectable, via classical covariance-based spectral methods.

Based on joint work with Stefan Birr (Ruhr-Universität Bochum), Holger Dette (Ruhr-Universität Bochum), Tobias Kley (London School of Economics) and Stanislav Volgushev (University of Toronto).

The main reference (JRSSB 2017, DOI: 10.1111/rssb.12231) is available online.

16 November 2017

Time-Varying Vector Autoregressive Models with Structural Dynamic Factors

Julia Schaumburg (Vrije Universiteit Amsterdam)

We suggest a simple methodology to estimate time-varying parameter vector autoregressive (VAR) models. In contrast to the widely used Bayesian approach, our approach is based on combining a dynamic factor model for the VAR coefficient matrices and a score-driven model for the time-varying variances.

Our algorithm is robust and fast, while being easy to implement. In a small simulation study, we demonstrate the good performance of the method. Furthermore, using the empirical data set on U.S. macroeconomic and financial variables that is also used in Prieto et al. (2016), we show that our approach is promising in modeling time-varying macro-financial linkages.

Joint work with Paolo Gorgi and Siem Jan Koopman

23 November 2017

Credit Conditions and the Effects of Economic Shocks: Amplification and Asymmetries

Ana Galvao (Warwick University)

In this paper we address three empirical questions related to credit conditions. First, do they change the dynamic interactions of economic variables? Second, do they enlarge the effects of economic shocks? Third, do they generate asymmetries in the effects of economic shocks? To answer these questions, we introduce endogenous regime switching in the parameters of a large Multivariate Autoregressive Index (MAI) model, where all variables react to a set of observable common factors.

We develop Bayesian estimation methods and show how to compute responses to common structural shocks. We find that credit conditions do act as a trigger variable for regime changes. Moreover, demand and supply shocks are amplified when they hit the economy during periods of credit stress. Finally, good shocks seem to have more positive effects during stress periods, in particular on unemployment.

(with A. Carriero and M. Marcellino)

30 November 2017

Keeping up with peers in India: A new social interactions model of perceived needs

Arthur Lewbel (Boston College)

We propose a new nonlinear model of social interactions. The model allows point identification of peer effects as a function of group means, even with group level fixed effects. The model is robust to measurement problems resulting from only observing a small number of members of each group, and therefore can be estimated using standard survey datasets. We apply our method to a national consumer expenditure survey dataset from India.

We find that each additional rupee spent by one's peer group increases one's own perceived needs by roughly 0.5 rupees. This implies that if my peers and I each increase spending by 1 rupee, the effect on my utility is the same as if I alone had increased spending by only 0.5 rupees. Our estimates have important policy implications; e.g., we show potentially considerable welfare gains from replacing government transfers of private goods with the provision of public goods.

7 December 2017

Topics in time varying coefficient models

George Kapetanios (Queen Mary University of London)

This presentation discusses recent work on time varying coefficient models. The first part discusses a test to distinguish between stationary and persistent volatility models. We find significant empirical evidence in favour of persistent volatility and against standard volatility models such as GARCH.

The second part discusses a new method for estimating time varying models based on the principles underlying the Hodrick-Prescott filter.

12 December 2017

Title: TBA

Emilio Porcu

No abstract

14 December 2017

Endogeneity in Semiparametric Threshold Regression

Andros Kourtellos (University of Cyprus)

In this paper, we investigate semiparametric threshold regression models with endogenous threshold variables based on a nonparametric control function approach. Using a series approximation, we propose a two-step estimation method for the threshold parameter. For the regression coefficients, we consider least-squares estimation in the case of exogenous regressors and two-stage least-squares estimation in the case of endogenous regressors.

We show that our estimators are consistent and derive their asymptotic distribution for weakly dependent data. Furthermore, we propose a test for the endogeneity of the threshold variable, which is valid regardless of whether the threshold effect is zero or not. Finally, we assess the performance of our methods using a Monte Carlo simulation.
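As a toy illustration of the threshold ingredient, in a simple exogenous-regressor threshold model the threshold parameter can be profiled out by a least-squares grid search over observed values of the threshold variable. The sketch below is a textbook-style caricature of this step, not the paper's two-step semiparametric estimator (the scalar no-intercept specification and the trimming fraction are illustrative assumptions):

```python
import numpy as np

def threshold_grid_search(y, x, q, trim=0.1):
    # Profile least squares for gamma in the model
    #   y = b1 * x * 1{q <= gamma} + b2 * x * 1{q > gamma} + e
    # with scalar regressor x and threshold variable q.
    lo, hi = np.quantile(q, [trim, 1 - trim])  # trim extreme candidates
    best_gamma, best_ssr = None, np.inf
    for gamma in np.unique(q[(q >= lo) & (q <= hi)]):
        ssr = 0.0
        for mask in (q <= gamma, q > gamma):
            b = (x[mask] @ y[mask]) / (x[mask] @ x[mask])
            resid = y[mask] - b * x[mask]
            ssr += resid @ resid
        if ssr < best_ssr:
            best_gamma, best_ssr = gamma, ssr
    return best_gamma
```

The regime-specific slopes then follow by least squares within each estimated regime (or by two-stage least squares when the regressors are endogenous, as in the paper).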