Spring 2009

Venue H10-31, time 15:30h.

Feb. 19

Mark Podolskij (ETH Zürich)

 

Realised quantile-based estimation of the integrated variance.

Abstract
In this talk we present a new estimator (the realised quantile-based estimator) of the integrated variance (IV). The main goal of our approach is the construction of a highly efficient estimator that is robust to jumps and outliers in the price process. This is realised in the following manner: we use the (symmetric) squared return quantiles over small subintervals to estimate IV. Since a jump (if there is any on the subinterval) corresponds to the largest return quantile, we exclude this quantile from the computation, which yields jump robustness. Moreover, we demonstrate how our method can be applied to models with microstructure noise.
Finally, we present some empirical results to illustrate the performance of our estimator.
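
A minimal numerical sketch of the idea (a heuristic illustration only, not the exact realised quantile-based estimator of the talk; the function name and the crude rescaling below are our own):

    import numpy as np

    def jump_robust_iv(returns, block_size=20):
        # Heuristic sketch: estimate integrated variance block by block,
        # discarding the largest squared return in each block as a
        # potential jump. The paper instead scales return quantiles by
        # their expected values under Gaussianity; the rescaling below
        # is only a rough stand-in.
        returns = np.asarray(returns, dtype=float)
        n = len(returns)
        iv = 0.0
        for start in range(0, n - block_size + 1, block_size):
            block = returns[start:start + block_size]
            trimmed = np.sort(block ** 2)[:-1]          # drop the largest
            iv += trimmed.sum() * block_size / (block_size - 1.0)
        return iv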

http://www.math.ethz.ch/~podolski/

Feb. 26

Michiel de Pooter (Federal Reserve Board)

 

Testing for Changes in Volatility in Heteroskedastic Time Series - A Further Examination.

Abstract
We consider tests for sudden changes in the unconditional volatility of conditionally heteroskedastic time series based on cumulative sums of squares. When applied to the original series, these tests suffer from severe size distortions: the correct null hypothesis of no volatility change is rejected much too frequently. Applying the tests to standardized residuals from an estimated GARCH model results in good size and reasonable power properties when testing for a single break in the variance. The tests also appear to be robust to different types of misspecification. An iterative algorithm is designed to test sequentially for the presence of multiple changes in volatility. An application to emerging markets stock returns clearly illustrates the properties of the different test statistics.
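
For reference, a minimal sketch of the classical cumulative-sum-of-squares statistic (in the Inclan-Tiao form) that such tests build on; the paper's remedy for the size distortions is to apply it to GARCH standardized residuals rather than to raw returns:

    import numpy as np

    def cusum_of_squares(x):
        # Inclan-Tiao-type statistic: maximal deviation of the normalized
        # cumulative sum of squares from its expected linear path.
        x = np.asarray(x, dtype=float)
        T = len(x)
        C = np.cumsum(x ** 2)
        D = C / C[-1] - np.arange(1, T + 1) / T
        k = int(np.argmax(np.abs(D)))       # most likely break location
        return np.sqrt(T / 2.0) * np.abs(D[k]), k + 1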

March 2

Robin Lumsdaine (American University)

 

Venue: T3-42, from 3:30pm to 5pm

The Global Inflation-linked Bond Market: Assessing Conventional Wisdoms

Abstract
Last September, the Dutch Ministry of Finance considered and decided against a proposal to issue inflation-linked bonds. Globally, the inflation-linked bond (ILB) market has grown dramatically, more than doubling in the last five years, with many countries now offering both real and nominal sovereign debt. In addition to the increasing size of most markets, the last five years have witnessed greater coverage along the maturity spectrum, so that in addition to well-developed nominal curves, many countries also have nearly complete real yield curves to inform policy decisions. Despite the dramatic growth in this market and a significant amount of market commentary related to it, it remains fairly concentrated relative to the nominal sovereign markets, both in terms of participants and specialist knowledge. There is relatively little historical experience with these markets through a variety of economic conditions, and the complexity of the market leaves many interesting topics for investigation. This paper considers a variety of practical issues in assessing inflation-linked bonds, particularly in light of the current financial crisis.

March 5

José Fernando Vera (University of Granada)

 

Latent Class Multidimensional Scaling Models

Abstract

The principal aim of Multidimensional Scaling (MDS) is the representation of a set of objects, usually in a Euclidean space of low dimension, by a configuration matrix, in a way that preserves given proximity information between the objects. Even though the deterministic approach has been the most widely employed estimation procedure in MDS, if the resulting error in the approximation of distances to dissimilarities is considered to be of a random nature, instead of deterministic as in the least squares method, a probabilistic model for MDS can be formulated by assuming a probability distribution for the dissimilarities. For two-way one-mode continuous rating dissimilarity data, a cluster-MDS model is proposed in this paper. The model aims at partitioning the objects into classes and simultaneously representing the cluster centres in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of dissimilarities in a maximum likelihood framework.

Monte Carlo optimization plays a fundamental role in the parameter estimation. For each trial classification of the objects, the derived partition of the original dissimilarity matrix into blocks is found. Then, a configuration of cluster centres in a low-dimensional space and the cluster dispersions are conditionally estimated by means of a Simulated Annealing (SA) estimation procedure. The proposed heuristic always first assigns the objects to the clusters, and then evaluates the log-likelihood on the derived dissimilarity partition. Hence, the algorithm ensures that, not only at the end but at every step of the SA heuristic, the relation in the object space between the objects and the latent classes is preserved. Thus, at the end of the SA procedure, the objects are assigned to the class they most likely belong to, in conjunction with the configuration of optimal cluster centres. A model selection strategy is used to determine both the number of latent classes and the dimensionality of the problem.
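
A generic simulated-annealing skeleton matching the order of operations described above (assign objects first, then evaluate the log-likelihood of the induced partition); loglik and propose are problem-specific stand-ins, not the authors' code:

    import numpy as np

    def anneal_partition(assign0, loglik, propose, temp0=1.0, cooling=0.95, iters=5000):
        rng = np.random.default_rng(0)
        current, cur_ll = assign0, loglik(assign0)
        best, best_ll = current, cur_ll
        temp = temp0
        for _ in range(iters):
            cand = propose(current, rng)     # reassign an object to another class
            cand_ll = loglik(cand)           # evaluated on the induced partition
            if cand_ll >= cur_ll or rng.random() < np.exp((cand_ll - cur_ll) / temp):
                current, cur_ll = cand, cand_ll
                if cur_ll > best_ll:
                    best, best_ll = current, cur_ll
            temp *= cooling                  # geometric cooling schedule
        return best, best_ll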

Paper (pdf-file)

http://www.ugr.es/~estadis/1/VeraF.html

March 10
**CANCELLED**

Eric Bradlow (University of Pennsylvania)

 

Venue: T3-02

Bayesian Estimation of Retail Demand Under Partially Observed Out-of-Stocks

Abstract
We develop a structural demand model that captures the effect of out-of-stocks on customer choice. Our estimation method uses store-level data on sales and partial information on product availability. Our model allows for flexible substitution patterns which are based on utility maximization principles and can accommodate categorical and continuous product characteristics. The methodology can be applied to data from multiple markets and in categories with a relatively large number of alternatives, slow-moving products and frequent out-of-stocks. We estimate our model using sales data from multiple stores for twenty-four items in the shampoo product category. In addition, we illustrate how the model can be used to assist a retailer's decisions in two ways.
First, we show how to quantify the lost sales induced by out-of-stock products. Second, we provide insights into the financial consequences of out-of-stocks and suggest simple policies that can help mitigate their negative economic impact.

Paper (pdf-file)

March 18

Gerdie Everaert (Ghent University)

 

Venue: H09-02

Using Backward Means to Eliminate Individual Effects from Dynamic Panels

Abstract:
The within-groups estimator is inconsistent in dynamic panels with fixed T since the sample mean used to eliminate the individual effects from the lagged dependent variable is correlated with the error term. This paper suggests eliminating individual effects from an AR(1) panel using backward means as an alternative to sample means. Using orthogonal deviations of the lagged dependent variable from its backward mean yields an estimator that is still inconsistent for fixed T, but the inconsistency is shown to be negligibly small. A Monte Carlo simulation shows that this alternative estimator has superior small sample properties compared to conventional fixed effects, bias-corrected fixed effects and GMM estimators. Interestingly, it is also consistent for fixed T in the specific cases where (i) T = 2, (ii) the AR parameter is 0 or 1, or (iii) the variance of the individual effects is zero.
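
A sketch of the mechanism in formulas (our notation; the paper's exact transformation and scaling may differ). In the AR(1) panel

    y_{it} = \alpha_i + \rho \, y_{i,t-1} + \varepsilon_{it},

the within-groups transformation subtracts the full-sample mean of y_{i,t-1}, which involves observations dated t and later and is therefore correlated with \varepsilon_{it}. The backward mean

    \bar{y}^{\,b}_{i,t-1} = \frac{1}{t-1} \sum_{s=1}^{t-1} y_{is}

uses only observations dated t-1 and earlier, so the deviation y_{i,t-1} - \bar{y}^{\,b}_{i,t-1} still removes the individual effect (both terms contain the level \alpha_i/(1-\rho) in the stationary case) while being nearly uncorrelated with \varepsilon_{it}; this is the source of the negligibly small remaining inconsistency.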

The paper is available at http://www.feb.ugent.be/SocEco/sherppa/members/gerdie/documents/paper2.pdf

March 26

Gael Martin (Monash University)

 

Modeling and Predicting Volatility and its Risk Premium: a Bayesian Non-Gaussian State Space Approach

Abstract:
The object of this paper is to model and forecast both objective volatility and its associated risk premium using a non-Gaussian state space approach. Option and spot market information on the unobserved volatility process is captured via non-parametric, `model-free' measures of option-implied and spot price-based volatility, with the two measures used to define a bivariate observation equation in the state space model. The risk premium parameter is specified as a conditionally deterministic dynamic process, driven by past `observations' on the volatility risk premium. The inferential approach adopted is Bayesian, implemented via a Markov chain Monte Carlo (MCMC) algorithm that caters for the non-linearities in the model and for the multi-move sampling of the latent volatilities. The simulation output is used to estimate predictive distributions for objective volatility, the instantaneous risk premium and the aggregate risk premium associated with a one-month option maturity. Linking the volatility risk premium parameter to the risk aversion parameter in a representative agent model, we also produce forecasts of the relative risk aversion of a representative investor.

The paper is not available for publication.

April 2

Thierry Denoeux (Université de Technologie de Compiègne)

 

Theory of belief functions: Application to classification and clustering

Abstract
The theory of belief functions (also referred to as Dempster-Shafer theory) is a generalization of probability theory allowing for the representation of uncertain and imprecise knowledge. Introduced in the context of statistical inference by A. P. Dempster in the 1960s, it was developed by G. Shafer in the 1970s as a general framework for combining evidence and reasoning under uncertainty. After a general introduction to this theory, we will focus on its application to data classification and clustering. As will be shown, Dempster-Shafer theory makes it possible to handle uncertain and imprecise observations, such as partially supervised data in classification tasks. The language of belief functions also allows us to generate rich descriptions of the data (using, e.g., the new concept of a credal partition in clustering problems), and to efficiently combine information coming from several sources (such as statistical data and expert knowledge).
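
As a concrete taste of the machinery, a small sketch of Dempster's rule of combination, the evidence-pooling operation at the heart of the theory (mass functions represented as dicts from frozenset focal elements to masses):

    def dempster_combine(m1, m2):
        # Combine two mass functions: multiply the masses of all pairs of
        # focal elements, pool the products on their intersections, and
        # renormalize by the non-conflicting mass.
        combined, conflict = {}, 0.0
        for A, mA in m1.items():
            for B, mB in m2.items():
                C = A & B
                if C:
                    combined[C] = combined.get(C, 0.0) + mA * mB
                else:
                    conflict += mA * mB          # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {C: m / (1.0 - conflict) for C, m in combined.items()}

    # Example on the frame {'a', 'b'}: one source favours 'a', the other 'b'.
    m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
    m2 = {frozenset('b'): 0.3, frozenset('ab'): 0.7}
    print(dempster_combine(m1, m2))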

http://www.hds.utc.fr/~tdenoeux/perso/doku.php

April 16

Arthur Lewbel (Boston College)

 

Returns to Lying When the Truth is Unobserved

Abstract
Consider an observed binary regressor D and an unobserved binary variable D*, both of which affect some other variable Y. This paper considers nonparametric identification and estimation of the effect of D on Y, conditioning on D* = 0. For example, suppose Y is a person's wage, the unobserved D* indicates if the person has been to college, and the observed D indicates whether the individual claims to have been to college. This paper then identifies and estimates the difference in average wages between those who falsely claim college experience versus those who tell the truth about not having college. We estimate these average returns to lying to be about 6% to 20%. Nonparametric identification without observing D* is obtained either by observing a variable V that is roughly analogous to an instrument for ordinary measurement error, or by imposing restrictions on model error moments.
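
In the abstract's notation, the parameter of interest can be formalized as

    r = E[Y | D = 1, D^* = 0] - E[Y | D = 0, D^* = 0],

the average wage difference between those who falsely claim college experience (D = 1, D^* = 0) and those who truthfully report having none (D = 0, D^* = 0); the identification challenge is that D^* is never observed.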

Paper (pdf)

 April 21

Stanislav Stakhovych (Groningen University)

  Modeling Geo-dependent Attitudes Using Bayesian Spatial Factor Analysis
Abstract:
Spatial variation in attitudes plays an important role in decisions on geographical marketing efforts, such as the targeting of direct mail campaigns and the scheduling of sales representatives. Similarly, it is important for financial service companies to schedule their financial planners across servable geographical regions based on the spatial heterogeneity in consumer preferences and attitudes towards financial products. However, studying these attitudes is difficult because they are latent in nature, often spatially correlated, and data may be sparse for some regions. To address these challenges, we propose a heterogeneous spatial factor analytic model that allows the extraction of spatially correlated latent factors. The model is implemented in a Bayesian framework that deals with the sparse data problem by letting regions borrow information from neighboring regions. Next, we propose a procedure for spatial scheduling based on the model results. Model performance is evaluated on artificial data. In an empirical study of consumer attitudes in the financial domain, we demonstrate the model's applicability. In particular, we show that our approach yields important insights into spatially varying attitudes, which can be used to improve the assignment of financial planners to regions. Finally, we increase managerial relevance by discussing additional marketing decisions that can be supported by this approach, and we discuss areas for future research.

JEL classification: C11, C12, C13, C15, C21, C52, G17
Key words: attitudes, financial planning, sales force optimization, spatial econometrics, Bayesian econometrics, factor analysis

April 22

Eelco van Asperen (Erasmus University Rotterdam)

 

Time: 16:00h

Flexibility in Port Selection: A Quantitative Approach Using Floating Stocks

Abstract

Ports provide a number of logistical choices concerning storage, onward transport, and postponement. We investigate the flexibility to reroute cargo en route that is offered by ports with a central location relative to the hinterland. This flexibility is investigated using an illustrative case in which a number of alternative strategies are evaluated by means of simulation. Detailed cost data were used for the illustrative case. The combination of a simulation model and detailed cost data allows us to quantify the value of the rerouting flexibility.
A combination of using regional distribution centers and a European Distribution Center results in the lowest cost per container.

April 23

 Jie Yu (KU Leuven)

  Efficient experimental designs for choice-based conjoint analysis

Abstract:
In this study, we propose an efficient individually adapted sequential Bayesian approach for constructing conjoint choice experiments. It uses Bayesian updating, a Bayesian analysis and a Bayesian design criterion for generating a conjoint-choice design for each individual respondent based on previous answers of that particular respondent. The proposed design approach is compared with two non-adaptive design approaches, the aggregate-customization design and the (nearly) orthogonal design approaches, under various degrees of response accuracy and consumer heterogeneity. A simulation study shows that the individually adapted sequential Bayesian conjoint-choice designs perform better than the benchmark approaches in all scenarios that we studied. In the presence of high consumer heterogeneity, the improvements achieved by the new method in terms of precision of estimation and accuracy of prediction are impressive. A key result of our simulations is that the new sequential approach to conjoint-choice design yields substantially better information about individual-level preferences than existing approaches. The new method also performs well when the response accuracy is low, in contrast with the recently proposed adaptive polyhedral choice-based question design approach.
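
A schematic of the adaptive loop described above; all argument names (d_error, update, answer) are hypothetical stand-ins for the paper's Bayesian design criterion, posterior-updating step and respondent interface:

    import numpy as np

    def adaptive_conjoint(prior_draws, candidate_sets, d_error, update, answer,
                          n_questions=12):
        draws = prior_draws                     # posterior draws for this respondent
        design, responses = [], []
        for _ in range(n_questions):
            # pick the choice set minimizing the Bayesian design criterion
            errors = [d_error(cs, draws) for cs in candidate_sets]
            next_set = candidate_sets[int(np.argmin(errors))]
            y = answer(next_set)                # observe the respondent's choice
            design.append(next_set)
            responses.append(y)
            draws = update(draws, next_set, y)  # Bayesian updating step
        return design, responses, draws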

May 6

Andreas Pick (De Nederlandsche Bank)

  Forecasting Random Walks under Drift Instability

Abstract:
This paper considers forecast averaging when the same model is used but estimation is carried out over different estimation windows. It develops theoretical results for random walks when their drift and/or volatility are subject to one or more structural breaks. It is shown that compared to using forecasts based on a single estimation window, averaging over estimation windows leads to a lower bias and to a lower root mean square forecast error for all but the smallest of breaks. Similar results are also obtained when observations are exponentially down-weighted, although in this case the performance of forecasts based on exponential down-weighting critically depends on the choice of the weighting coefficient. The forecasting techniques are applied to 20 weekly series of stock market futures and it is found that average forecasting methods in general perform better than using forecasts based on a single estimation window.
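
A minimal sketch of the window-averaging idea for a random walk with drift (the function name and the equal weighting across windows are our simplifications):

    import numpy as np

    def average_window_forecast(y, min_window=10):
        # One-step-ahead forecasts y[-1] + drift, with the drift estimated
        # over every window ending at the last observation, then averaged
        # with equal weights across windows.
        y = np.asarray(y, dtype=float)
        forecasts = [y[-1] + (y[-1] - y[-w]) / (w - 1)
                     for w in range(min_window, len(y) + 1)]
        return np.mean(forecasts)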

May 7

Michael Pitt (University of Warwick)

 

Abstract:
In this paper we provide a unified methodology for conducting likelihood-based inference on the unknown parameters of a general class of discrete-time stochastic volatility models, characterized by both a leverage effect and jumps in returns. Given the non-linear/non-Gaussian state-space form, the likelihood for the parameters is approximated using output generated by the particle filter. Methods are employed to ensure that the approximating likelihood is continuous as a function of the unknown parameters, thus enabling the use of Newton-Raphson type maximization algorithms. Our approach is robust and efficient relative to alternative Markov chain Monte Carlo schemes employed in such contexts. The technique is applied to daily returns data of various leading stock price indices.
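
To fix ideas, a generic bootstrap particle filter estimate of the log-likelihood (the talk's contribution is a smoothed variant that is continuous in the parameters, which this plain sketch is not; init, transition and logdens_obs are user-supplied model pieces):

    import numpy as np

    def particle_loglik(y, init, transition, logdens_obs, n_particles=1000, seed=0):
        rng = np.random.default_rng(seed)
        x = init(n_particles, rng)              # initial state particles
        ll = 0.0
        for yt in y:
            x = transition(x, rng)              # propagate through the state equation
            logw = logdens_obs(yt, x)           # log observation densities
            m = logw.max()
            w = np.exp(logw - m)
            ll += m + np.log(w.mean())          # log-sum-exp likelihood increment
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            x = x[idx]                          # multinomial resampling
        return ll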

Website

May 8

Achim Zeileis (Wirtschaftsuniversität Wien)

  A Unified Approach to Testing, Monitoring, and Dating Structural Changes

Abstract:
A unified toolbox for testing, monitoring, and dating structural changes is presented for a general class of models in an M-estimation framework, including least squares and (quasi-)maximum likelihood. All techniques employ a model's objective function or associated estimating function, respectively; inference is established based on a functional central limit theorem that holds under the null hypothesis of structural stability. The resulting set of methods includes many well-established techniques, especially for least-squares regression, but also facilitates extension to a wide class of other models. The usefulness of this approach is illustrated by assessing the stability of "de facto" exchange rate regimes, where a (quasi-)normal regression model is adopted to capture changes in the error variance as well as in the regression coefficients. The toolbox is used for investigating the Chinese exchange rate regime after China abandoned its fixed exchange rate to the US dollar in 2005, and for tracking the evolution of the Indian exchange rate regime from 1993 to 2007.
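
A minimal instance of the M-fluctuation idea for least squares (our own simplification of the general framework): cumulate the estimating-function contributions at the full-sample fit and decorrelate them; under structural stability the resulting process behaves like a Brownian bridge.

    import numpy as np

    def score_fluctuation(X, y):
        # Empirical fluctuation process from the OLS estimating functions
        # x_t * residual_t, scaled by their outer-product covariance.
        T = X.shape[0]
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        scores = X * (y - X @ beta)[:, None]
        J = scores.T @ scores / T
        Jroot_inv = np.linalg.cholesky(np.linalg.inv(J))
        process = np.cumsum(scores, axis=0) @ Jroot_inv / np.sqrt(T)
        return process   # compare e.g. its sup-norm to Brownian-bridge critical values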

May 14

Vasilis Sarafidis (University of Sydney)

 

To Pool or Not To Pool: A Partially Heterogeneous Alternative

Abstract:
This paper proposes a new modeling framework for the analysis of panel data based on the concept of `partitional clustering'. In particular, the population of cross-sections is grouped into clusters, such that parameter homogeneity is maintained only within clusters. To determine the (unknown) number of clusters we put forward an information-based criterion, which, as we prove, is strongly consistent for fixed T; in other words, it selects the correct number of clusters with probability one as the number of cross-sections grows large. Simulation experiments show that the proposed criterion performs well even with moderately small N. We apply the method to a panel data set of commercial banks and find significant differences in the slope parameters of the estimated cost function.
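
A generic sketch of information-criterion-based selection of the number of clusters (the BIC-style penalty below is a stand-in, not the paper's criterion):

    import numpy as np

    def select_num_clusters(fit_ssr, n_params, n_obs, max_k=10):
        # fit_ssr(k): residual sum of squares of the model with k clusters;
        # n_params(k): number of estimated parameters with k clusters.
        ic = {k: np.log(fit_ssr(k) / n_obs) + n_params(k) * np.log(n_obs) / n_obs
              for k in range(1, max_k + 1)}
        return min(ic, key=ic.get)           # smallest criterion value wins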

 June 10

 Frank Diebold (University of Pennsylvania)

 

Arbitrage-Free Dynamic Nelson-Siegel Yield Curve Modeling

Abstract:
We derive the class of arbitrage-free affine dynamic term structure models that approximate the widely-used Nelson-Siegel yield-curve specification. Our theoretical analysis relates this new class of models to the canonical representation of the three-factor arbitrage-free affine model. Our empirical analysis shows that imposing the Nelson-Siegel structure on the canonical representation of affine models greatly improves its empirical tractability; furthermore, we find that imposing the absence of arbitrage improves predictive performance.
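
For reference, the dynamic Nelson-Siegel specification that the paper makes arbitrage-free models the yield on a zero-coupon bond of maturity \tau as

    y_t(\tau) = L_t + S_t \frac{1 - e^{-\lambda \tau}}{\lambda \tau} + C_t \left( \frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau} \right),

with level, slope and curvature factors (L_t, S_t, C_t). The arbitrage-free version retains these factor loadings and adds a maturity-dependent yield-adjustment term.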

Paper

 July 20

André Alves Portela Santos (Universidad Carlos III de Madrid)

 

Venue: H10-31, time 12:00-13:00

Combining VaR predictions


Abstract:
We propose a method to combine (or average) predictions of value-at-risk (VaR) models based on the criteria imposed by Basel II. The method selects the model combination that minimizes the daily capital requirements subject to restrictions on the number of VaR violations over the last trading year. Some details regarding the optimization strategy will be discussed, and a practical implementation involving 10 different univariate and multivariate VaR models will be shown.
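
A stylized sketch of the optimization (the 60-day averaging, the multiplier k and the random search over weights are our simplifications of the Basel II rule and of the paper's strategy):

    import numpy as np

    def avg_capital(var_comb, k=3.0):
        # Stylized Basel II daily capital: max of the day's VaR and k times
        # the trailing 60-day average VaR, averaged over the sample.
        avg60 = np.convolve(var_comb, np.ones(60) / 60.0, mode="valid")
        return np.maximum(var_comb[59:], k * avg60).mean()

    def combine_var(var_models, returns, max_violations=4, n_draws=5000, seed=0):
        # Random search over convex combination weights of candidate VaR
        # models (columns of var_models, reported as positive numbers),
        # minimizing capital subject to a cap on the number of violations.
        rng = np.random.default_rng(seed)
        best_w, best_cap = None, np.inf
        for _ in range(n_draws):
            w = rng.dirichlet(np.ones(var_models.shape[1]))
            var_comb = var_models @ w
            if np.sum(returns < -var_comb) <= max_violations:
                cap = avg_capital(var_comb)
                if cap < best_cap:
                    best_w, best_cap = w, cap
        return best_w, best_cap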

Organizers

Andreas Alfons
Room: H11-21
Phone: 010-408288
Email: alfons@remove-this.ese.eur.nl

and

Wendun Wang
Room: H11-26
Phone: 010-4088756
Email: wang@ese.eur.nl

For more information:

Anneke Kop
Room: H11-04
Phone: 010-4081259
Email: eb-secr@remove-this.ese.eur.nl

 

The Econometric Institute Seminars are supported by: