Strategies to improve the quality of macroeconomic forecasts


Macroeconomic forecasts for key variables like GDP growth, inflation and unemployment are usually reasonably accurate, except when it matters most, that is, at the eve and dawn of recessions. Most such forecasts combine forecasts from econometric models with expert judgement. Econometric models almost by definition have a hard time predicting recessions, and one would expect and hope that expert judgement would be beneficial in crucial times. Well, apparently it is not, as we may have learned from the 2008/2009 recession. So something goes wrong.

The scarce literature evaluating model-based forecasts against expert-adjusted forecasts shows that experts tend to overreact to news and then to over-adjust. These insights have had limited impact, however, because in most, if not almost all, cases it is unknown what the underlying econometric model forecasts are, if there are any. The IMF, World Bank, OECD, ECB, and various other banks and institutes publish forecasts without telling how they create them.

This groundbreaking project, in which interviews and experiments will be carried out with actual forecasters, seeks to understand the origins of available forecasts, to unravel which part is judgement, and to establish which of the potentially many cognitive biases are at stake when these forecasts are made. This is done using various econometric techniques and field interviews.

The project should result in a PhD dissertation and one or two publications in international journals.


Macroeconomic forecasts; accuracy; expert adjustment; forecast bias; improvement


Macroeconomic forecasts are a key input to macroeconomic policies issued by governments and central banks. These forecasts typically concern important variables like growth in Gross Domestic Product (GDP), unemployment and inflation. The forecasts usually provide an outlook on the short- (current year and next year) to medium-term (five years) developments.

Econometric models can form the basis for macroeconomic forecasts, and in reasonably prosperous times these models tend to do well. Unfortunately, when times change radically, most econometric models are by their very nature not equipped to predict turning points. Such turning points are key features of the business cycle chronology, as they mark the eve of a recession or the beginning of a prosperous period. It is therefore common practice to base macroeconomic forecasts on the outcome of an econometric model combined with expert judgement, or in fact to use no econometric model at all.

Such judgement, with or without econometric model input, does not always lead to success, as can be observed for the 2008/2009 recession. In those years, real GDP growth in the USA was -0.3 and -2.8, respectively. In the June 2008 survey of the Consensus Forecasters, the average quote for 2008 was 1.5 (ranging from 0.8 to 1.9), while the average quote for 2009 was 1.7 (with a highest and lowest score of 3.1 and 0.6, respectively). Even in the November 2008 survey, the average quote for the very same year, 2008, was 1.4 (with high and low 1.5 and 1.3, respectively), whereas the average quote for 2009 was now -0.6 (with individual quotes ranging from 1.2 to -2.1). Apparently, either the econometric model forecasts were off track, or the expert adjustments were, or both.

In recent years, we have seen much research on improving econometric models, sometimes using higher-frequency data, and in other cases including more variables or components. Modern machine learning tools have also been developed, in which novel search algorithms are implemented. Big data in various dimensions are exploited (think of retrieving price data from the internet with availability per minute), and text mining using web crawlers also seems a promising avenue.

This project is, however, not about those econometric and machine learning techniques; it will focus on the judgement part. It is widely understood that no model or technique can incorporate everything that is relevant, and there will always be unforeseen events that may need to be addressed in the forecasts. Think of natural disasters, sudden bursts of price bubbles (we know that they will happen, but when?), radical changes in political activities (think of the US - North Korea talks) and other watershed events.

In this project, we will examine whether professional forecasters make improper judgements, potentially driven by heuristics and biases. Which of these biases hurt most? We will examine how we can teach forecasters to act differently once they become aware of these biases and heuristics. Moreover, we will examine whether different actions really lead to more accurate forecasts. Together this should lead to a list of do's and don'ts when creating forecasts.

Franses, P.H., H. Kranendonk, and D. Lanser (2011), One model and various experts: Evaluating Dutch macroeconomics forecasts, International Journal of Forecasting, 27, 482-495.

Franses, P.H. and N. Maassen (2017), Consensus forecasters: How good are they individually and why? Journal of Management Information and Decision Sciences (20).


This entire project consists of five subprojects, but for this PhD project we start with the first two. The focus is on the forecasts delivered by various institutions. These are the Dutch Central Bank (DNB), the Netherlands Bureau for Economic Policy Analysis (CPB), the European Central Bank (ECB), the Federal Reserve Board (Federal Open Market Committee, FOMC), the International Monetary Fund (IMF), the Organisation for Economic Cooperation and Development (OECD), the Survey of Professional Forecasters (SPF) and the Consensus Economics Forecasters (Consensus). We have access to the forecasts and we shall have access to the forecasters themselves. The latter is crucial as it will be important to actually speak to them. The variables of interest are growth in Gross Domestic Product (GDP growth), inflation and the unemployment rate, although extensions to other variables can be considered too. The economies of interest are mostly the industrialized countries in America, Europe, and Australasia, but whenever possible the focus can also include various developing countries, as long as data are available.

The first subproject concerns the econometric analysis of the forecasts themselves. For this analysis, we will use various metrics. The first is the forecast error. The second metric is based on an econometric benchmark model, to be created by the PI and a PhD student, which allows comparing the final forecasts with a newly created model forecast. This first subproject amounts to a meta-analysis of the available forecasts created by DNB, CPB, ECB, FOMC, IMF, OECD, SPF and Consensus. The study will show what the forecasters are doing, and it will indicate the relative accuracy of their forecasts.
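To make the forecast-error metric and the benchmark comparison concrete, here is a minimal sketch in Python. All numbers are invented placeholders for illustration only, not actual institutional forecasts; the random-walk benchmark (last year's realization as this year's forecast) is one common, simple choice, not necessarily the model the project will build.

```python
def mean_error(forecasts, actuals):
    """Mean forecast error (forecast minus actual): a persistently
    positive value indicates systematic over-prediction."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(forecasts)

def rmse(forecasts, actuals):
    """Root mean squared forecast error."""
    n = len(forecasts)
    return (sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / n) ** 0.5

# Invented example: annual GDP growth forecasts vs. realized outcomes.
actual = [2.0, 1.5, -0.3, -2.8, 2.5]
expert = [2.2, 1.8, 1.5, -0.6, 2.0]   # hypothetical expert-adjusted forecasts
naive = [2.0, 2.0, 1.5, -0.3, -2.8]   # random-walk benchmark: last year's value

print(mean_error(expert, actual))     # positive: over-optimism in this sample
print(rmse(expert, actual), rmse(naive, actual))
```

Comparing the RMSE of the published forecasts against the benchmark's RMSE, per institution and per variable, is the kind of relative-accuracy tabulation the meta-analysis would deliver.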

In the second subproject, the PI and the PhD student will try to elicit the biases and heuristics from the interviews and the actual forecasts. An example of a potentially relevant bias could be that forecasters adhere to the so-called Law of Small Numbers (LSN), which according to Rabin (2002) is that people "exaggerate how likely it is that a small sample resembles the parent population from which it is drawn." Other biases that could be at play are "availability bias" and "over-optimism". A potential social phenomenon for professional forecasters could be herding, where age and experience may mediate. In sum, this second subproject aims to elicit which biases and heuristics are most prominent for macroeconomic forecasters. In part, the evidence will be based on the surveys, but an analysis of the forecast errors can be revealing too. For example, if the forecast error is more often negative than positive, then the forecasts would have been too optimistic. In the end, this second subproject leads to a limited set of the most prevalent biases and heuristics, those with the largest impact on forecast quality.
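The sign-based diagnostic mentioned above (errors more often negative than positive) can be formalized with an exact binomial sign test. A minimal sketch, with invented error counts for illustration:

```python
from math import comb

def sign_test_p(n_negative, n_total):
    """Two-sided exact binomial sign test of H0: negative and positive
    forecast errors are equally likely (probability 0.5 each)."""
    n_extreme = max(n_negative, n_total - n_negative)
    tail = sum(comb(n_total, k) for k in range(n_extreme, n_total + 1))
    return min(1.0, 2 * tail * 0.5 ** n_total)

# Invented example: 16 of 20 forecast errors (actual minus forecast) are
# negative, i.e. the forecast exceeded the realization in 16 of 20 years.
p_value = sign_test_p(16, 20)
print(round(p_value, 4))  # a small p-value suggests systematic optimism
```

A rejection of the null here would point towards over-optimism as one of the prevalent biases; the same test run per forecaster could feed the mediation analysis of age and experience.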

Literature references

  • Ang, A., G. Bekaert, and M. Wei (2007), Do macro variables, asset markets, or surveys forecast inflation better? Journal of Monetary Economics, 54, 1163-1212.
  • Athanasopoulos, G., and Hyndman, R.J. (2011), The value of feedback in forecasting competitions, International Journal of Forecasting, 27, 845-849.
  • Barber, B. and T. Odean (2001), Boys will be boys: Gender, overconfidence, and common stock investment, Quarterly Journal of Economics, 116, 261-292. 
  • Bernhardt, D., M. Campello, and E. Kutsoati (2006), Who herds?, Journal of Financial Economics, 80, 657-675.
  • Beyer, S. and E. Bowden (1997), Gender differences in self-perceptions: Convergent evidence from three measures of accuracy and bias, Personality and Social Psychology Bulletin, 23, 157-172. 
  • Blattberg, R.C. and S.J. Hoch (1990), Database models and managerial intuition: 50% model + 50% manager, Management Science, 36, 887-899.
  • Boulaksil, Y. and P.H. Franses (2009), Experts’ stated behavior, Interfaces, 39, 168-171.
  • Bunn, D.W. and A.A. Salo (1996), Adjustment of forecasts with model consistent expectations, International Journal of Forecasting, 12, 163-170.
  • Camerer, C.F. (1989), Does the basketball market believe in the “hot hand”?, American Economic Review, 79, 1257-1261.
  • Cerf, C. and V. Navasky (1998), The Experts Speak: The Definitive Compendium of Authoritative Misinformation, New York: Villard.
  • Clements, M.P. and A.B. Galvao (2008), Macroeconomic forecasting with mixed-frequency data, Journal of Business and Economic Statistics, 26, 546-554.
  • Croson, R. and U. Gneezy (2009), Gender differences in preferences, Journal of Economic Literature, 47, 448-474.
  • Daniel, K. and D. Hirshleifer (2015), Overconfident investors, predictable returns, and excessive trading, Journal of Economic Perspectives, 29 (4), 61-88.
  • De Bondt, W.F.M. and R.H. Thaler (1987), Further evidence on investor overreaction and stock market seasonality, Journal of Finance, 42, 557-581.
  • Durham, G.R., M.G. Hertzel, and J. S. Martin (2005), The market impact of trends and sequences in performance: New evidence, Journal of Finance, 60, 2551-2569.
  • Eckel, C.C., and P.J. Grossman (2008), Forecasting risk attitudes: An experimental study using actual and forecast gamble attitudes, Journal of Economic Behavior and Organization, 68, 1-17.
  • Fildes, R., P. Goodwin, M. Lawrence and K. Nikopoulos (2009), Effective forecasting and judgemental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning, International Journal of Forecasting, 25, 3-23.
  • Franses, P.H. (2004), Do we think we make better forecasts than in the past?: A survey of academics, Interfaces, 34, 466-468.
  • Franses, P.H. (2014), Expert Adjustments of Model Forecasts; Theory, Practice and Strategies for Improvement, Cambridge UK: Cambridge University Press.
  • Franses, P.H., H. Kranendonk, and D. Lanser (2011), One model and various experts: Evaluating Dutch macroeconomics forecasts, International Journal of Forecasting, 27, 482-495.
  • Franses, P.H. and R. Legerstee (2009), Properties of expert adjustments on model-based SKU-level forecasts, International Journal of Forecasting, 25, 35-47.
  • Franses, P.H. and R. Legerstee (2010), Do experts’ adjustments on model-based SKU-level forecasts improve forecast quality? Journal of Forecasting, 29, 331-340.
  • Franses, P.H. and D.J.C. van Dijk (2018), Combining expert-adjusted forecasts, Working Paper, Presented at the International Symposium of Forecasting, Santander, Spain, 2016.
  • Franses, P.H., D.J.C. van Dijk and A. Opschoor (2014), Time Series Models for Business and Economic Forecasting, Second revised edition, Cambridge: Cambridge University Press.
  • Gysler, M., J.B. Kruse and R. Schubert (2002), Ambiguity and gender differences in financial decision making: An experimental examination of competence and confidence effects, Unpublished working paper, Swiss Federal Institute of Technology.
  • Heath, C. and A. Tversky (1991), Preference and belief: Ambiguity and competence in choice under uncertainty, Journal of Risk and Uncertainty, 4, 5-28.
  • Kahneman, D. (2012), Thinking, Fast and Slow, London: Penguin.
  • Kliger, D. and O. Levy (2010), Overconfident investors and probability misjudgements, Journal of Socio-Economics, 39, 24-29.
  • Lamont, O.A. (2002), Macroeconomic forecasts and microeconomic forecasters, Journal of Economic Behavior & Organization, 48, 265-280.
  • Legerstee, R. and P.H. Franses (2014), Do experts’ SKU forecasts improve after feedback? Journal of Forecasting, 33, 69-79.
  • Malmendier, U. and T. Taylor (2015), On the verges of overconfidence, Journal of Economic Perspectives, 29 (4), 3-8.
  • Mathews, B.P. and A. Diamantopoulos (1986), Managerial intervention in forecasting: An empirical investigation of forecast manipulation, International Journal of Research in Marketing, 3, 3-10.
  • Mathews, B.P. and A. Diamantopolous (1989), Judgemental revision of statistical forecasts: A longitudinal extension. Journal of Forecasting, 8, 129-140.
  • Mathews, B.P. and A. Diamantopolous (1990), Judgemental revision of sales forecasts: Effectiveness of forecast selection, Journal of Forecasting, 9, 407-415.
  • Meehl, P. (1954), Clinical Versus Statistical Prediction, Minneapolis: University of Minnesota Press.
  • Patton, A.J. and A. Timmermann (2007), Testing forecast optimality under unknown loss, Journal of the American Statistical Association, 102, 1172-1184.
  • Pierdzioch, Ch., J.-C. Ruelke, and G. Stadtmann (2013), Forecasting metal prices: Do forecasters herd?, Journal of Banking and Finance, 37, 150-158.
  • Powell, M. and D. Ansic (1997), Gender differences in risk behaviour in financial decision-making: An experimental analysis, Journal of Economic Psychology, 18, 605-628.
  • Rabin, M. (2002), Inference by believers in the Law of Small Numbers, Quarterly Journal of Economics, 117, 775-816.
  • Tetlock, Ph. and D. Gardner (2015), Superforecasting: The Art & Science of Prediction, London: Random House Books.
  • Tversky, A. and D. Kahneman (1971), Belief in the Law of Small Numbers, Psychological Bulletin, 76, 105-110.
  • Tversky, A. and D. Kahneman (1974), Judgement under uncertainty: Heuristics and biases, Science, 185, 1124-1131.



Expected output

While documenting the actual performance of forecasters, the project entails a second part, which deals with actually asking the forecasters what it is that they do. The supervisor has experience with running surveys amongst academics (Franses, 2004) and amongst individuals who modify model-based forecasts (Boulaksil and Franses, 2009). A much more extensive survey will be held amongst professional forecasters. This survey includes the forecasters at institutes like DNB, CPB, FED, IMF and OECD, but the individual forecasters involved in SPF and Consensus shall also be contacted. This sample will exceed 50 individuals. At the same time, we will hold an email survey amongst the authors of articles published, for example, in the International Journal of Forecasting and the Journal of Forecasting over the last ten to twenty years. Next, we will hold in-depth interviews with at least 10 staff members of DNB and the CPB, who are responsible for the actual forecasts these institutes deliver. This will lead to a PhD dissertation, one or two publications in international journals like the International Journal of Forecasting and the Quarterly Journal of Economics, and presentations at international conferences (like the EEA/ESEM).

Scientific relevance

A PhD dissertation, one or two publications in international journals like the International Journal of Forecasting and the Quarterly Journal of Economics, and presentations at international conferences (like the EEA/ESEM).

Societal relevance

The key objectives of this project are to discover and implement strategies to improve the quality of macroeconomic forecasts. As interviews and experiments will be carried out with actual forecasters, this project is ground-breaking and fully novel.

PhD candidate profile

An econometrician


Prof. dr. Philip Hans Franses
T: +31 (0)10 4081273

Prof. dr. Dick van Dijk
T: +31 (0)10 4081263

Graduate school

This project is affiliated with the Tinbergen Institute graduate school. Applicants for this project need to pass the Tinbergen Institute's admission requirements before they can be considered for a PhD position at Erasmus School of Economics.

Note that the Tinbergen Institute requires valid GRE General Test results from all applicants. Be aware that available seats for this test fill up very fast, so book your test well in advance. Please contact the GRE programme for specific questions about the GRE test.


Application deadline: 15 January 2019


Apply for this project using our online application form. Please use the project code below to apply for this project.

Tinbergen project code: