Schedule Spring 2010

Venue: H10-31
Time: 15:30.

Feb. 11

Steffen Grønneberg (University of Oslo)


The Copula Information Criterion

Abstract:

The maximum pseudo-likelihood estimator is a popular method for fitting parametric copulae to iid data. Asymptotic properties such as consistency, root-n normality and behavior with respect to goodness-of-fit functionals are well developed, but many investigations have used the unmodified AIC formula for model selection, ignoring the fact that the pseudo-likelihood is not a proper likelihood. We derive a model selection formula called the copula information criterion, and show that model selection formulas in the style of the AIC can fail to exist for copulae with extreme behavior near the edge of the unit cube (such as copulae with tail dependence). This can be seen as a demarcation of which types of copulae are suited to estimation by the maximum pseudo-likelihood estimator when the parametric model is believed to be wrong.
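
To make the object under discussion concrete, here is a minimal sketch of maximum pseudo-likelihood estimation for a one-parameter Clayton copula, together with the naive AIC formula that the talk argues is unjustified in this setting. The Clayton family, the simulated data and all function names are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def pseudo_obs(x):
    """Map each margin to (0,1) via rescaled ranks, as in the MPLE."""
    n = x.shape[0]
    return np.column_stack([rankdata(x[:, j]) / (n + 1) for j in range(x.shape[1])])

def clayton_loglik(theta, u):
    """Log pseudo-likelihood of the bivariate Clayton copula (theta > 0)."""
    a, b = u[:, 0], u[:, 1]
    s = a ** (-theta) + b ** (-theta) - 1.0
    ll = np.log1p(theta) - (1 + theta) * (np.log(a) + np.log(b)) - (2 + 1 / theta) * np.log(s)
    return ll.sum()

rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)  # toy data
u = pseudo_obs(x)
res = minimize_scalar(lambda t: -clayton_loglik(t, u), bounds=(0.01, 20), method="bounded")
naive_aic = 2 * res.fun + 2  # -2*loglik + 2*p with p = 1; NOT valid for pseudo-likelihoods
print(f"theta_hat = {res.x:.3f}, naive AIC = {naive_aic:.1f}")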

Link to paper:

http://www.math.uio.no/eprint/stat_report/2008/07-08.html

Feb. 18 

Helmut Lütkepohl (European University Institute, Florence)


Structural Vector Autoregressions with Markov Switching

Abstract:

It is argued that in structural vector autoregressive (SVAR) analysis a Markov regime switching (MS) property can be exploited to identify shocks if the reduced-form error covariance matrix varies across states. The model setup is formulated and discussed, and it is shown how it can be used to test restrictions that are just-identifying in a standard structural vector autoregressive analysis. The approach is illustrated by two SVAR examples which have been reported in the literature and which have features that can be accommodated by the MS structure.
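
The identification idea can be stated compactly. The following is a paraphrase in generic notation (a sketch of the standard MS-SVAR decomposition, not necessarily the paper's exact formulation): with reduced-form errors u_t and Markov state s_t, the state-dependent covariance matrices are decomposed as

\[
\operatorname{E}[u_t u_t' \mid s_t = m] \;=\; \Sigma_m \;=\; B \Lambda_m B', \qquad
\Lambda_1 = I_K, \quad \Lambda_m = \operatorname{diag}(\lambda_{m1}, \dots, \lambda_{mK}).
\]

If the diagonal elements \lambda_{mk} are distinct across k, the impact matrix B of the structural shocks \varepsilon_t = B^{-1} u_t is unique up to sign and column permutation. Restrictions on B that would be merely just-identifying in a fixed-covariance SVAR thus become over-identifying, and hence testable.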

Paper (PDF format)

Venue: H10-31

Feb. 25

Stanislav Anatolyev (New Economic School, Moscow)


Sequential Testing with Uniformly Distributed Size

Abstract:

Sequential procedures for testing structural stability do not provide much guidance on the shape of the boundaries that are used to decide on acceptance or rejection, requiring only that the overall size of the test is asymptotically controlled. We introduce and motivate a reasonable criterion for the shape of the boundaries, which requires that the test size be distributed uniformly over the testing period. Under this criterion, we numerically construct boundaries for the most popular sequential tests, which are characterized by a test statistic behaving asymptotically either as a Wiener process or as a Brownian bridge. We handle this problem both in the context of retrospective testing of a historical sample and in the context of monitoring newly arriving data. We tabulate the boundaries by fitting them to certain flexible but parsimonious functional forms. Interesting patterns emerge in an illustrative application of sequential tests to the Phillips curve model.
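
The paper's starting point can be illustrated with a toy Monte Carlo: for a statistic behaving like a Wiener process W(t) on (0,1], calibrate a constant c so that the square-root boundary c*sqrt(t) has 5% overall size, then look at when the rejections actually occur. The crossing times concentrate heavily early in the period, which is the kind of imbalance the uniform-size criterion is designed to remove. The grid, path count and boundary family below are my own illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
n_paths, n_grid = 5000, 1000
t = np.arange(1, n_grid + 1) / n_grid
W = np.cumsum(rng.standard_normal((n_paths, n_grid)), axis=1) / np.sqrt(n_grid)
R = np.abs(W) / np.sqrt(t)  # the boundary c*sqrt(t) is crossed at t iff R > c

def overall_size(c):
    return np.mean((R > c).any(axis=1))

lo, hi = 1.0, 6.0  # crude bisection for the constant giving 5% overall size
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if overall_size(mid) > 0.05 else (lo, mid)
c = 0.5 * (lo + hi)

hit = R > c
times = t[hit.argmax(axis=1)[hit.any(axis=1)]]  # first-crossing times of rejecting paths
print(f"c = {c:.2f}, overall size = {overall_size(c):.3f}")
print("quartiles of rejection times:", np.round(np.quantile(times, [0.25, 0.5, 0.75]), 3))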


Link: http://ideas.repec.org/p/cfr/cefirw/w0123.html
Paper: http://www.cefir.ru/papers/WP123_new.pdf

Venue: H10-31

March 11

Enrique Sentana (CEMFI, Madrid)


New Testing Approaches for Mean-Variance Predictability

Abstract:
We propose tests for smooth but persistent serial correlation in risk
premia and volatilities that exploit the non-normality of financial
asset returns. Our parametric tests are robust to distributional
misspecification, while our nonparametric tests are as powerful as if we
knew the true distribution of excess returns. Local power analyses
confirm their gains over existing methods, while Monte Carlo exercises
document their finite sample reliability. We apply our methods to the
Fama-French factors for US stocks. We find mean predictability for the
size and value factors but not the market, and variance predictability
for all of them.
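
For readers who want the baseline these tests are meant to improve on, the textbook diagnostics for mean and variance predictability are Ljung-Box statistics on returns and on squared returns. The sketch below is that benchmark only, not the authors' robust or nonparametric tests, and the data are a simulated placeholder.

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
r = rng.standard_normal(1000)  # placeholder for a series of excess returns
print(acorr_ljungbox(r, lags=[12]))                    # predictability in the mean
print(acorr_ljungbox((r - r.mean()) ** 2, lags=[12]))  # predictability in the variance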

Paper: see ftp://ftp.cemfi.es/pdf/papers/es/mvpredictability.pdf

Venue: H10-31

April 8

Angelo Ranaldo (Swiss National Bank)


Limits to Arbitrage During the Crisis: Funding Liquidity Constraints and Covered Interest Parity


Paper (PDF format)

Venue: H10-31

May 6

Michael Wolf (Universität Zürich)


Fund-of-Funds Construction by Statistical Multiple Testing Methods

Abstract
Fund-of-funds (FoF) managers face the task of selecting a (relatively) small number of hedge funds from a large universe of candidate funds. We analyse whether such a selection can be successfully achieved by looking at the track records of the available funds alone, using advanced statistical techniques. In particular, at a given point in time, we determine which funds significantly outperform a given benchmark while, crucially, accounting for the fact that a large number of funds are examined at the same time. This is achieved by employing so-called multiple testing methods. Then, the equal-weighted or the global minimum variance portfolio of the outperforming funds is held for one year, after which the selection process is repeated. When backtesting this strategy on two particular hedge fund universes, we find that the resulting FoF portfolios have attractive return properties compared both to the 1/N portfolio (that is, simply equal-weighting all the available funds) and to two investable hedge fund indices.
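
The selection step can be sketched as follows: test each fund's mean excess return over the benchmark and adjust for the multiplicity of funds examined. The paper's multiple testing methods are more sophisticated (bootstrap-based); Holm's step-down correction below is a simple stand-in, and the simulated returns are placeholders.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_months, n_funds = 120, 200
excess = rng.standard_normal((n_months, n_funds)) * 0.03  # returns net of benchmark
excess[:, :10] += 0.01                                    # ten genuinely outperforming funds

tstat, pval = stats.ttest_1samp(excess, 0.0, alternative="greater")
reject, p_adj, _, _ = multipletests(pval, alpha=0.05, method="holm")
selected = np.flatnonzero(reject)  # funds entering the FoF portfolio for the next year
print(len(selected), "funds selected:", selected)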

Paper (PDF format)

Venue: H10-31

May 20

Matteo Ciccarelli (European Central Bank, Frankfurt)

Trusting the Bankers: A New Look at the Credit Channel of Monetary Policy

Abstract
The identification of the credit channel is challenging because changes in the demand and supply of credit are difficult to measure. To solve this problem we use the detailed answers on credit supply and demand from the unique, confidential Euro area Bank Lending Survey and from the U.S. Senior Loan Officer Survey. Embedding this information within an otherwise standard VAR model, we find that: (1) the credit channel of monetary policy is active through the balance sheets of households, firms and banks; (2) the impact of a monetary policy shock on GDP is larger through credit supply than through credit demand; (3) the bank lending channel is stronger than the balance-sheet channel for firms, whereas the latter is stronger for households; (4) a credit crunch (for firms in the Euro area and for households in the US) has contributed to reducing GDP in the recent banking crisis. Expansionary monetary policy has partly counterbalanced this decline in the Euro area.
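
In skeleton form, the empirical strategy is to embed the survey-based credit supply and demand measures in an otherwise standard VAR and trace a monetary policy shock through to GDP. The variable names, simulated data and recursive (Cholesky) ordering below are my own simplification, not the paper's exact specification.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
data = pd.DataFrame(rng.standard_normal((200, 4)),  # placeholders for the actual series
                    columns=["gdp_growth", "inflation", "credit_supply", "policy_rate"])
res = VAR(data).fit(2)
irf = res.irf(12)  # orthogonalized (Cholesky) impulse responses over 12 periods
gdp_to_policy = irf.orth_irfs[:, 0, 3]  # response of gdp_growth to a policy_rate shock
print(np.round(gdp_to_policy, 3))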

Paper (PDF format)

Venue: H10-31

May 27

Jacques Commandeur (SWOV)


Multivariate Multiple-cohort Models of Latent Accident Risk in Time Series

Abstract:
We introduce a multivariate model framework for the simultaneous time-series analysis of accident risk and exposure in multiple-cohort data, and apply it to exposure and fatality data from six Australian states and from ten European countries. Road safety time-series data are often available at a disaggregated level, where accident counts and exposure measures are separated into different geographic or demographic cohorts. Existing time-series models of road accident data do not allow for correlations and the possible existence of common factors across cohorts, which means valuable information can be ignored. To address this issue, we develop a multivariate latent variable model of traffic accident risk and exposure, which is used to compare and benchmark road safety across Australian states and European countries. Two variations of the model are presented. The first allows the different cohorts to have separate but correlated accident risk and exposure developments. The second uses a common factor approach to investigate idiosyncratic developments in risk and exposure. Safety developments across cohorts are shown to be significantly correlated, a fact that can be exploited by common factor models.
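
The basic building block of such a latent risk framework is a structural (state-space) time-series model for one cohort; the paper's contribution lies in stacking these blocks across cohorts with correlated disturbances or common factors. The univariate local linear trend sketch below, on simulated log fatality counts, shows only the single-cohort ingredient.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
log_fatalities = np.log(300) - 0.02 * np.arange(30) + rng.normal(0, 0.05, 30)  # toy cohort

model = sm.tsa.UnobservedComponents(log_fatalities, level="local linear trend")
res = model.fit(disp=False)
print("smoothed latent level (first 5 years):", np.round(res.level.smoothed[:5], 3))
print("smoothed slope (annual trend in risk):", np.round(res.trend.smoothed[-1], 4))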


Venue: H10-31

June 3

Martina Vandebroek (Katholieke Universiteit Leuven)


A Comparison of Different Bayesian Design Criteria for Constructing Efficient Discrete Choice Experiments

Abstract:
Bayesian optimal design theory provides a solid basis for coping with
the problem of design dependence on the unknown parameters in stated
preference studies. The Bayesian design criterion that is used in
published work on the optimal design of stated preference studies is
based on the Fisher information matrix. However, several other Bayesian
design criteria exist, some of which are known to have better finite
sample properties than criteria based on the Fisher information matrix.
The alternative design criteria are based on the generalized Fisher
information matrix, the expected posterior covariance matrix, and the
expected gain in Shannon information. In this study, we apply these
alternative Bayesian design criteria in the context of stated preference
studies and compare the performance of the resulting stated preference
designs. We investigate in detail how well the designs perform
in terms of the design criteria for which they were not optimized, and
study situations where the stated preference data are analyzed in a
Bayesian fashion and in a non-Bayesian fashion (using maximum
likelihood). Our simulation results favor a Bayesian design criterion
based on the generalized Fisher information matrix, as it appears to be
the only computationally feasible criterion that can compete with the
overall best criterion, which is based on the expected posterior
covariance matrix.
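
As a concrete reference point, the criterion used in the earlier literature is the Bayesian D-error of a multinomial logit design: the prior average of det(I(X, beta))^(-1/K), usually approximated by Monte Carlo over prior draws. The sketch below computes it for a toy design; the design, prior and function names are illustrative assumptions.

import numpy as np

def mnl_information(X, beta):
    """Fisher information of an MNL choice design; X has shape (sets, alts, K)."""
    K = X.shape[2]
    info = np.zeros((K, K))
    for Xs in X:  # one choice set at a time
        p = np.exp(Xs @ beta)
        p /= p.sum()
        info += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs
    return info

def bayesian_d_error(X, prior_draws):
    K = X.shape[2]
    return np.mean([np.linalg.det(mnl_information(X, b)) ** (-1.0 / K) for b in prior_draws])

rng = np.random.default_rng(6)
X = rng.choice([-1.0, 1.0], size=(8, 3, 4))  # 8 choice sets, 3 alternatives, 4 attributes
draws = rng.multivariate_normal([-0.5, 0.5, 0.5, -0.5], 0.25 * np.eye(4), size=200)
print("Bayesian D-error:", round(bayesian_d_error(X, draws), 4))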

Venue: H10-31

June 10

Frank Tuyl (University of Newcastle, Australia)


Estimation of the binomial parameter: in defence of Bayes (1763)

Abstract:
For Bayesian estimation of the binomial parameter, when the aim is to "let the data speak for themselves", the uniform or Bayes-Laplace prior appears preferable to the reference/Jeffreys prior recommended by objective Bayesians like Berger and Bernardo. In this setting confidence intervals tend to be "exact" or "approximate", aiming for either minimum or mean coverage to be nominal. The latter criterion tends to be preferred, subject to "reasonable" minimum coverage. I will give examples of how the highest posterior density credible interval based on the uniform prior appears to outperform both common approximate intervals and intervals based on the Jeffreys prior, which usually represent credible intervals in review articles. The above "coverage" is frequentist or unconditional, but Bayesian or conditional coverage is also of interest. I will give examples of how the concept of conditional coverage may show the inadequacy of certain confidence intervals, suggesting that this type of coverage is more relevant than unconditional coverage. Coverage with respect to a prior has been referred to as "averaging", but in the context of additional (frequentist) averaging across the random variable X; conditional coverage amounts to omitting the latter step. It is then of interest, from a robustness point of view, how competing noninformative priors perform when faced with "each other's" averaging. Taking this approach, it appears that for both the Poisson and the binomial model the uniform rather than the Jeffreys prior is the clear winner.
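
The comparison at the heart of the talk is easy to reproduce in simplified form: compute the exact frequentist coverage of binomial credible intervals under the uniform prior Beta(1,1) and under the Jeffreys prior Beta(1/2,1/2). The sketch below uses equal-tailed rather than HPD intervals to stay short, so it is an approximation to the talk's comparison, not a reproduction of it.

import numpy as np
from scipy.stats import beta, binom

def coverage(n, p, a0, b0, level=0.95):
    """Exact coverage: sum P(X = x) over the x whose posterior interval contains p."""
    x = np.arange(n + 1)
    lo = beta.ppf((1 - level) / 2, a0 + x, b0 + n - x)
    hi = beta.ppf(1 - (1 - level) / 2, a0 + x, b0 + n - x)
    covers = (lo <= p) & (p <= hi)
    return binom.pmf(x[covers], n, p).sum()

for p in (0.05, 0.2, 0.5):
    print(p, " uniform:", round(coverage(25, p, 1, 1), 3),
             " Jeffreys:", round(coverage(25, p, 0.5, 0.5), 3))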

Venue: H10-31

June 17

Carolin Strobl (Ludwig-Maximilians-Universität München)


Accounting for Individual Differences in Paired Comparisons

Abstract:
The preference scaling of a group of subjects may not be homogeneous: different groups of subjects with certain characteristics may show different preference scalings, each of which can be derived from paired comparisons by means of the Bradley-Terry model. Usually, either different models are fit in predefined subsets of the sample, or the effects of subject covariates are specified explicitly in a parametric model. In both cases, categorical covariates can be employed directly to distinguish between the different groups, while numeric covariates are typically discretized prior to modeling. Here, a semi-parametric approach for recursive partitioning of Bradley-Terry models is introduced as a means of identifying groups of subjects with homogeneous preference scalings in a data-driven way. In this approach, the covariates that - in main effects or interactions - distinguish between groups of subjects with different preference orderings are detected automatically from the set of candidate covariates. One main advantage of this approach is that sensible partitions in numeric covariates are also detected automatically. Application areas include attitude measurement and market segmentation.
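
The base model being partitioned is the Bradley-Terry model, in which P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)) for worth parameters s. The sketch below fits it by maximum likelihood on simulated comparisons for a single homogeneous group; the recursive partitioning over subject covariates that the talk introduces is not shown here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
true_scores = np.array([0.0, 0.5, 1.0, 1.5])  # worths of 4 objects being compared
pairs, wins = [], []
for i in range(4):
    for j in range(i + 1, 4):
        p = 1 / (1 + np.exp(true_scores[j] - true_scores[i]))  # P(i beats j)
        pairs.append((i, j))
        wins.append(rng.binomial(30, p))  # 30 comparisons per pair

def neg_loglik(s_free):
    s = np.concatenate([[0.0], s_free])  # fix s_0 = 0 for identifiability
    ll = 0.0
    for (i, j), w in zip(pairs, wins):
        p = 1 / (1 + np.exp(s[j] - s[i]))
        ll += w * np.log(p) + (30 - w) * np.log(1 - p)
    return -ll

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
print("estimated worth parameters:", np.round(np.concatenate([[0.0], fit.x]), 2))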

Link: http://epub.ub.uni-muenchen.de/10588/1/BTL2mob.pdf

Venue: H10-31

Organizers

Andreas Alfons
Room: H11-21
Phone: 010-408288
Email: alfons@ese.eur.nl

and

Wendun Wang
Room: H11-26
Phone: 010-4088756
Email: wang@ese.eur.nl

For more information:

Anneke Kop
Room: H11-04
Phone: 010-4081259
Email: eb-secr@ese.eur.nl


The Econometric Institute Seminars are supported by: