
Venue: H10-31
Time: 16:00

Marcin Jaskowski (Econometric Institute, EUR)

29 January 2015

First-Passage-Time in Discrete Time

Abstract:
We present a semi-closed-form method for computing the first-passage-time (FPT) density of discrete-time Markov stochastic processes. Our method provides an exact solution for one-dimensional processes and approximations for higher dimensions. In particular, we show how to find an exact form of the FPT for an AR(1) process and an approximate FPT for a VAR(1) process. The method is valid for any type of innovation process as long as multi-period transition probabilities can be computed. It is intuitively straightforward, avoids complex mathematical tools and is therefore well suited to econometric applications. For instance, our method can be used to build structural models of duration without invoking simplistic continuous-time Brownian motion models. Finally, the proposed method can be implemented efficiently by parallelizing the computing tasks.
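A minimal numerical sketch of this idea, assuming Gaussian AR(1) innovations: the FPT distribution is built up on a grid from one-step transition probabilities. It is an illustration only, not the semi-closed-form solution of the paper; the function name and grid bounds are hypothetical choices.

```python
import numpy as np
from scipy.stats import norm

def fpt_ar1(phi, sigma, x0, barrier, horizon, grid=np.linspace(-10.0, 10.0, 2001)):
    """P(tau = t) for t = 1..horizon, where x_t = phi*x_{t-1} + eps_t, eps_t ~ N(0, sigma^2),
    and tau is the first time the process reaches or exceeds the barrier."""
    dx = grid[1] - grid[0]
    below = grid < barrier
    # one-step transition density p(x_t = grid[i] | x_{t-1} = grid[j])
    trans = norm.pdf(grid[:, None], loc=phi * grid[None, :], scale=sigma)
    # probability of crossing the barrier in one step from each surviving state
    cross = norm.sf(barrier, loc=phi * grid, scale=sigma)
    # density of paths still below the barrier after the first step, starting at x0
    dens = norm.pdf(grid, loc=phi * x0, scale=sigma) * below
    probs = [norm.sf(barrier, loc=phi * x0, scale=sigma)]  # P(tau = 1)
    for _ in range(1, horizon):
        probs.append(np.sum(dens * cross) * dx)            # P(tau = t)
        dens = (trans @ dens) * dx * below                 # propagate surviving paths
    return np.array(probs)
```

For example, fpt_ar1(0.9, 1.0, 0.0, 2.0, 50) approximates the probability of a first barrier crossing at each of the first 50 periods.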

Authors: Marcin Jaskowski and Dick van Dijk

Julie Josse (Agrocampus Ouest, Rennes)

12 February 2015

A flexible framework for regularized low-rank matrix estimation

Abstract

Low-rank matrix estimation plays a key role in many scientific and engineering tasks including collaborative filtering and image denoising.
Low-rank procedures are often motivated by the statistical model where we observe a noisy matrix drawn from some distribution with expectation assumed to have a low-rank representation. The statistical goal is to try to recover the signal from the noisy data. Classical approaches are centered around singular-value decomposition algorithms.
Although the truncated singular value decomposition has been extensively used and studied, the estimator is found to be noisy and its performance can be improved by regularization.
Methods based on singular-value shrinkage have achieved considerable empirical success and also have provable optimality properties in the Gaussian noise model (Gavish & Donoho, 2014). In this presentation, we propose a new framework for regularized low-rank estimation that does not start from the singular-value shrinkage point of view. Our approach is motivated by a simple parametric bootstrap idea. In the simplest case of isotropic Gaussian noise, we end up with a new singular-value shrinkage estimator, whereas for non-isotropic noise models our procedure yields new estimators that perform well in experiments.
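For orientation, a minimal sketch of the two baseline estimators referred to above, the truncated SVD and soft-thresholding of singular values; the parametric-bootstrap estimator proposed in the talk is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def truncated_svd(X, rank):
    """Keep only the leading singular values/vectors (hard truncation)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def soft_threshold_svd(X, lam):
    """Shrink every singular value towards zero by lam (singular-value shrinkage)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt
```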

Jesus Gonzalo Muñoz (Universidad Carlos III de Madrid)

26 February 2015

TRENDs in Distributional Characteristics: Existence of Global Warming

Abstract
We define Global Warming (GW) as the existence of a trend in a given characteristic of the temperature distribution, not only in the mean. These new trends are characterized following the suggestions in Granger and White (2011). To make the new concept operational, we propose to estimate the distributional characteristics (Ci) by realized quantiles (RQ). In this way, they are converted into time series objects and all the tools of time series analysis can be applied. The paper proposes a robust “low-cost econometrics” test for the existence of an unknown trend in a given characteristic (the null hypothesis is no trend). Asymptotic properties of the test as well as its finite-sample performance are provided. For those characteristics where the null is rejected, we model and estimate their trend behavior. Different trend models are specified (via general-to-particular significance testing and model selection criteria) and a forecasting competition is run. Forecast evaluation tests are used to choose the best trend model for Cit. With these trend models, we are able to produce forecasts of the trending behavior of the temperature distribution, and not only of the mean as is standard in this literature. The paper applies this novel approach to study the existence of GW (regional warming) in central UK thermometer data, 1772-2012, and concludes that GW is more pronounced in the lower quartiles of the temperature distribution than in the upper part. These results also hold in other northern hemisphere locations such as Stockholm, Cadiz and Milan.
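As a toy illustration of the realized-quantile idea (an assumed setup, not the paper's robust trend test): annual quantiles of daily temperatures are computed, treated as a time series, and a plain OLS trend is fitted to each.

```python
import numpy as np

def realized_quantiles(daily_temps_by_year, q):
    """daily_temps_by_year: list of 1D arrays of daily temperatures, one per year."""
    return np.array([np.quantile(year, q) for year in daily_temps_by_year])

def linear_trend(series):
    """OLS slope of the realized quantile series on time (trend per year)."""
    t = np.arange(len(series))
    slope, _ = np.polyfit(t, series, 1)
    return slope
```

Comparing linear_trend(realized_quantiles(data, 0.1)) with the same quantity at the 0.9 quantile mimics, informally, the comparison of the lower and upper parts of the temperature distribution discussed above.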

Co-Author: Lola Gadea (U. Zaragoza)

Bent Nielsen (University of Oxford)

3 March 2015

Outlier detection algorithms for least squares time series

Abstract
We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators, in particular the Impulse Indicator Saturation, iterated 1-step Huber-skip M-estimators and the Forward Search. These methods classify observations as outliers or not. From the asymptotic results we establish a new asymptotic theory for the gauge of these methods, which is the expected frequency of falsely detected outliers. The asymptotic theory involves normal distribution results and Poisson distribution results. The theory is applied to a time series data set.
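A minimal sketch of a single 1-step Huber-skip update under an assumed cutoff; the Impulse Indicator Saturation, the iterated estimators, the Forward Search and the gauge theory discussed in the talk are not reproduced here.

```python
import numpy as np

def huber_skip_step(X, y, cutoff=2.576):
    """One Huber-skip step: fit OLS, flag large residuals, refit on retained observations."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # initial full-sample OLS
    resid = y - X @ beta
    sigma = np.std(resid, ddof=X.shape[1])          # residual scale estimate
    keep = np.abs(resid) <= cutoff * sigma          # classify observations
    beta_new = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return beta_new, ~keep                          # updated estimate and outlier flags
```

With a Gaussian cutoff such as 2.576, roughly one percent of non-outlying observations is flagged by chance, which is the kind of false-detection frequency the gauge formalizes.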

Authors: Johansen, S. and Nielsen, B. (2014). Download: Nuffield Discussion Paper 2014-W04. Method implemented in the R package ForwardSearch.

Roman Liesenfeld (Universität Köln)

5 March 2015

Likelihood Evaluation of High-Dimensional Spatial Latent Gaussian Models with Non-Gaussian Response Variables

Abstract
We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of Efficient Importance Sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus Maximum Likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.

Co-Authors: Jean-Francois Richard, Jan Vogler

Asger Lunde (Aarhus University)

26 March 2015

Volatility and Firm Specific News Arrival

Abstract
Starting with the advent of the event study methodology, the puzzle of how public information relates to changes in asset prices has gradually unraveled. Using a sample of 28 large US companies, we investigate how more than 3 million firm-specific news items are related to firm-specific stock return volatility. We specify a return generating process in conformance with the mixture of distributions hypothesis, in which stock return volatility has a public and a private information processing component. Following public information arrival, prices incorporate public information contemporaneously, while private processing of public information generates private information that is incorporated sequentially. We refer to this model as the information processing hypothesis of return volatility and test it using time series regression. Our results provide evidence that public information arrival is related to increases in volatility and volatility clustering. Even so, clustering in public information does not fully explain volatility clustering. Instead, the presence of significant lagged public information effects suggests that private information, generated following the arrival of public information, plays an important role. Including indicators of public information arrival explains an incremental 5 to 20 percent of the variation in changes of firm-specific return volatility. Contrary to prior financial information research, our investigation favors the view that return volatility is related to public information arrival.

Co-Authors: Robert F. Engle (Stern School of Business) and Martin Klint (CREATES)

Paulo Rodrigues (Bank of Portugal)

16 April 2015

A New Regression-Based Tail Index Estimator

Abstract
In this paper, a new regression-based approach for the estimation of the tail index of heavy-tailed distributions is introduced. Compared with many procedures currently available in the literature, our method does not involve order statistics theory and can potentially be applied in a very general context. The procedure is in line with approaches used in experimental data analysis with fixed explanatory variables. Several important features of our procedure are worth highlighting. First, it provides a bias reduction over available regression-based methods and a fortiori over standard least-squares based estimators of the tail index α. Second, it is relatively resilient to the choice of the tail length used in the estimation of α, particularly so when compared to the widely used Hill estimator. Third, when the effect of the slowly varying part of the Pareto-type model (the so-called second-order behavior of the Taylor expansion) vanishes slowly, our estimator continues to perform satisfactorily, whereas the Hill estimator rapidly deteriorates.
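For reference, a minimal sketch of two standard benchmarks mentioned above, the Hill estimator and a simple log-log rank-size regression; the new regression-based estimator of the paper is not reproduced here, and the helper names are illustrative.

```python
import numpy as np

def hill_alpha(x, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    xs = np.sort(x)[::-1]
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

def rank_size_alpha(x, k):
    """OLS of log rank on log size over the k largest observations; -slope estimates alpha."""
    xs = np.sort(x)[::-1][:k]
    ranks = np.arange(1, k + 1)
    slope, _ = np.polyfit(np.log(xs), np.log(ranks), 1)
    return -slope
```

Both benchmarks depend on the choice of the tail length k, which is exactly the sensitivity the abstract claims the new procedure alleviates.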

Dante Amengual (CEMFI)

23 April 2015

Is a normal copula the right copula?

Abstract
Nowadays copulas are extensively used in economics and finance applications, with the Gaussian copula being very popular despite ruling out non-linear dependence, particularly in the lower tail. We derive computationally simple and intuitive expressions for score tests of Gaussian copulas against Generalised Hyperbolic alternatives, which include the symmetric and asymmetric Student t, and Hermite polynomial expansions. We decompose our tests into third and fourth moment analogues, and obtain more powerful one-sided Kuhn-Tucker versions that are equivalent to the Likelihood Ratio test, whose asymptotic distribution we provide. We conduct detailed Monte Carlo exercises to study our proposed tests in finite samples.

Co-author: Enrique Sentana

Xu Cheng (University of Pennsylvania)

7 May 2015

Shrinkage Estimation of High-Dimensional Factor Models with Structural Instabilities

Abstract
In large-scale panel data models with latent factors the number of factors and their loadings may change over time. This paper proposes an adaptive group-LASSO estimator that consistently determines the numbers of pre- and post-break factors and the stability of factor loadings. The data-dependent LASSO penalty is customized to account for unobserved factors and an unknown break date. A novel feature of our estimator is its robustness to unknown break dates. Existing procedures either overestimate the number of factors by neglecting the breaks or require known break dates for a subsample analysis. In an empirical application, we study the change in factor loadings and the emergence of new factors during the Great Recession.

Co-authors: Zhipeng Liao, Frank Schorfheide

Vladimir Tikhomirov (Lomonosov Moscow State University)

21 May 2015 (12:00-13:00, location H15-29)

Survey of the theory of extremal problems

Abstract
The theory of extremal problems aims to create methods and principles for the solution and investigation of concrete extremal problems. In this talk, the theory and its applications will be considered from a general point of view, as a unique branch of mathematics.
The lecture will touch on the following questions:
1. Necessary conditions
2. Stability theory and sufficient conditions
3. Theory of existence
4. Algorithms
5. Application of the theory to natural science, economics, technology and mathematics

Robin Lumsdaine (ESE)

21 May 2015

The Intrafirm Complexity of Systemically Important Financial Institutions

Abstract
In November 2011, the Financial Stability Board, in collaboration with the International Monetary Fund, published a list of 29 “systemically important financial institutions” (SIFIs). This designation reflects a concern that the failure of any one of them could have dramatic negative consequences for the global economy and is based on “their size, complexity, and systemic interconnectedness”. While the characteristics of “size” and “systemic interconnectedness” have been the subject of a good deal of quantitative analysis, less attention has been paid to measures of a firm’s “complexity.” In this paper we take on the challenge of measuring the complexity of a financial institution by exploring the use of the structure of an individual firm’s control hierarchy as a proxy for institutional complexity. The control hierarchy is a network representation of the institution and its subsidiaries. We show that this mathematical representation (and various associated metrics) provides a consistent way to compare the complexity of firms with often very disparate business models and as such may provide the foundation for determining a SIFI designation. By quantifying the level of complexity of a firm, our approach may also prove useful should firms need to reduce their level of complexity, either in response to business or regulatory needs. Using a data set containing the control hierarchies of many of the designated SIFIs, we find that between 2011 and 2013 these firms decreased their level of complexity, perhaps in response to regulatory requirements.

The paper is available at www.ssrn.com/abstract=2604166

Co-authors:   Daniel N. Rockmore (Dartmouth College), Nick Foti (University of Washington), Gregory Leibon (Dartmouth College), J. Doyne Farmer (Oxford University)

Chen Zhou (Econometric Institute)

28 May 2015

Statistics of heteroskedastic extremes

Abstract
We extend classical extreme value theory to non-identically distributed observations. When the distribution tails are proportional, much of extreme value statistics remains valid. The proportionality function for the tails can be estimated nonparametrically, along with the (common) extreme value index. In the case of a positive extreme value index, joint asymptotic normality of both estimators is shown; they are asymptotically independent. We also establish asymptotic normality of a forecasted high quantile and develop tests for the proportionality function and for the validity of the model. We show through simulations the good performance of the procedures and also present an application to stock market returns. A main tool is the weak convergence of a weighted sequential tail empirical process.

Vladimir Yu. Protassov (Moscow State University)

4 June 2015

Leontief model: how to make the economy productive

Abstract
The Leontief input-output model describes inter-industry relationships with linear algebraic tools. The productivity of an economy is expressed in terms of the Perron eigenvalue of the consumption matrix. Suppose that for each sector of the economy we have several technologies available to organize production. We need to make a choice for each sector so that the whole economy is productive. Mathematically, this is equivalent to minimizing the spectral radius over a special set of nonnegative matrices. This problem is notoriously hard. Nevertheless, in most cases it can be solved efficiently by the so-called spectral simplex method. Some interesting relations of this problem to other areas of mathematics, such as combinatorics and functional analysis, will also be discussed.
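A brute-force sketch of the selection problem described above (an illustration, not the spectral simplex method): enumerate one technology row per sector and keep the combination with the smallest Perron eigenvalue; the economy is productive whenever that eigenvalue is below one.

```python
import itertools
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def best_technology_choice(rows_per_sector):
    """rows_per_sector[i]: list of candidate consumption rows (1D arrays) for sector i."""
    best_rho, best_A = np.inf, None
    for choice in itertools.product(*rows_per_sector):
        A = np.vstack(choice)                  # one technology chosen per sector
        rho = spectral_radius(A)
        if rho < best_rho:
            best_rho, best_A = rho, A
    return best_rho, best_A                    # productive iff best_rho < 1
```

Enumeration grows exponentially in the number of sectors, which is why the spectral simplex method mentioned in the abstract is needed in practice.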

Jan Magnus (Vrije Universiteit)

11 June 2015

Weighted-average least squares

Abstract
Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted-average least squares (WALS) is a recent model-averaging approach, which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.

Erik Hennink (ORTEC)

18 June 2015

Long-term mean credit spread curves

Abstract
In this research, the long-term mean assumptions for credit spread curves for different ratings are determined. This is done using a model that converts historical cumulative default probabilities into risk-neutral ones, together with a constant, though rating-dependent, assumption for the liquidity risk premium. The shapes of the constructed credit curves are consistent with the theoretical shapes of credit spread curves. In addition, the model-implied spread curves are found to be generally in line with the historical ones.
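A textbook-style sketch of the kind of mapping involved, under constant-hazard and constant-recovery simplifications that are assumptions of this illustration rather than of the talk: a risk-neutral cumulative default probability and a recovery rate give an approximate spread, to which a rating-dependent liquidity premium is added.

```python
import numpy as np

def approx_spread(cum_pd, maturity, recovery=0.4, liquidity_premium=0.0):
    """Approximate annualized credit spread from a risk-neutral cumulative default probability."""
    hazard = -np.log(1.0 - cum_pd) / maturity          # constant-hazard approximation
    return hazard * (1.0 - recovery) + liquidity_premium
```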

Marcelo Medeiros (Pontificia Universidade Católica)

9 July 2015

An Artificial Counterfactual Approach for Aggregate Data

Abstract
We consider a new method to conduct counterfactual analysis with aggregate data when a "treated" unit suffers a shock or an intervention, such as a policy change. The proposed approach is based on the construction of an artificial counterfactual from a pool of "untreated" peers, and is inspired by different branches of the literature, such as the Synthetic Control method, Global Vector Autoregressive models, the econometrics of structural breaks, and counterfactual analysis based on macro-econometric and panel data models. We derive an asymptotically Gaussian estimator for the average effect of the intervention and present a collection of companion hypothesis tests. We also discuss finite-sample properties and conduct a detailed Monte Carlo experiment.
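A minimal sketch of the general idea, assuming a simple OLS fit of the treated unit on its untreated peers over the pre-intervention sample; the paper's actual estimator and its inference are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def artificial_counterfactual(y_treated, X_peers, t0):
    """y_treated: (T,) outcome of the treated unit; X_peers: (T, n) untreated peers;
    t0: first post-intervention period."""
    X = np.column_stack([np.ones(len(y_treated)), X_peers])
    beta = np.linalg.lstsq(X[:t0], y_treated[:t0], rcond=None)[0]  # pre-intervention fit
    counterfactual = X @ beta                                      # extrapolate to the post period
    avg_effect = np.mean(y_treated[t0:] - counterfactual[t0:])     # average intervention effect
    return counterfactual, avg_effect
```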

Organizers

Andreas Alfons
Room: H11-21
Phone: 010-408288
Email: alfons@ese.eur.nl

and

Wendun Wang
Room: H11-26
Phone: 010-4088756
Email: wang@ese.eur.nl

For more information:

Anneke Kop
Room: H11-04
Phone: 010-4081259
Email: eb-secr@ese.eur.nl

 

The Econometric Institute Seminars are supported by: