Digital Platforms: Causal Inference & Quantitative Design

ERIM Summer School 2026 Course
  • Dates: 29, 30 June & 1, 2 July 2026
  • Time: 10:00-12:45
  • Format: Online
  • ECTS: 2
  • Instructor: Dr. Martin Quinn
  • Fee: €500 (free of charge for ERIM PhD candidates)

Abstract

Learn how to turn messy digital-platform data into credible, publishable causal evidence. This PhD course studies marketplaces, social media, and search/streaming platforms, with a focus on how to find and justify identification in real platform environments. We start from the research question, defining the right estimand and the stakeholder outcomes of interest, and map it to feasible sources of variation and the data needed to measure them. The course emphasizes modern DiD and event-study designs for platform policies, how to diagnose and stress-test them with falsification and robustness tools, and when (and how) to use experiments, accounting for interference and spillovers. By the end, participants will be able to (i) formulate platform research questions as clear causal estimands, (ii) build and defend DiD/event-study designs for platform shocks, (iii) evaluate platform experiments under spillovers and multi-sided responses, and (iv) use ready-to-adapt R/Python templates and credibility checklists to move from idea to a seminar-ready empirical design.

This online PhD course trains students to use online-platform data to answer causal questions about digital platforms. We cover how to identify and measure shocks in platform settings (rollouts, ranking/recommendation updates, moderation/enforcement changes, pricing/monetization moves, and regulatory events), and how to analyze them using modern DiD/event studies, or with experiments when randomization is feasible, accounting for spillovers and marketplace interactions.
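To give a flavor of the kind of analysis the course covers (this is an illustrative sketch on simulated data, not one of the course's own R/Python templates), the simplest difference-in-differences estimate compares the before/after change for units hit by a hypothetical platform policy against the same change for unaffected units:

```python
# Minimal 2x2 difference-in-differences sketch on simulated panel data.
# A hypothetical policy change hits "treated" units from period t0 onward;
# all names and parameter values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, T, t0 = 200, 10, 5            # units, periods, first treated period
treated = rng.random(n) < 0.5    # half the units exposed to the policy
true_effect = 2.0

# Outcome = unit fixed effect + common time trend + treatment effect + noise.
unit_fe = rng.normal(0.0, 1.0, n)
y = np.empty((n, T))
for t in range(T):
    post = t >= t0
    y[:, t] = (unit_fe + 0.3 * t
               + true_effect * treated * post
               + rng.normal(0.0, 0.5, n))

def did(y, treated, t0):
    """DiD: (treated post - pre) minus (control post - pre)."""
    pre = y[:, :t0].mean(axis=1)
    post = y[:, t0:].mean(axis=1)
    return ((post[treated] - pre[treated]).mean()
            - (post[~treated] - pre[~treated]).mean())

est = did(y, treated, t0)
print(f"DiD estimate: {est:.2f}")  # recovers roughly the true effect
```

Unit fixed effects and the common trend difference out, which is exactly the parallel-trends logic the course stress-tests; the event-study and staggered-adoption estimators discussed in Sessions 2-3 generalize this two-group, two-period comparison.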

Teaching combines lectures with discussion of pre-assigned papers and hands-on activities (worked examples, design clinics, and structured peer feedback). Students are expected to prepare via readings before each session and learn between sessions through independent study, including optional walkthroughs of provided R/Python templates and brief reflective notes for their own research ideas (not graded).

By the end of the course, students will be able to: 

  1. Translate platform questions into causal questions.
  2. Identify credible sources of variation in platform settings and the data needed to study them.
  3. Implement and interpret causal inference analyses for digital platforms.
  4. Diagnose and strengthen causal claims using robustness checks suited to platform data problems.
  5. Design and interpret platform experiments when randomization is feasible.

Session 1

  • Goldfarb, A. & Tucker, C. (2019). “Digital Economics.” Journal of Economic Literature.

Session 2

  • Quinn, M., Godinho de Matos, M. & Peukert, C. (2023). “The Welfare Effects of Mobile Internet Access: Evidence from Roam-Like-at-Home.” The Economic Journal.
  • Sun, L. & Abraham, S. (2021). “Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects.” Journal of Econometrics.

Session 3

  • Rambachan, A. & Roth, J. (2023). “A More Credible Approach to Parallel Trends.” The Review of Economic Studies.
  • Goodman-Bacon, A. (2021). “Difference-in-Differences with Variation in Treatment Timing.” Journal of Econometrics.

Session 4

  • Brynjolfsson, E., Collis, A. & Deisenroth, D. (2025). “The Consumer Welfare Effects of Online Ads: Evidence from a Nine-Year Experiment.” American Economic Review: Insights.
  • Peukert, C., Sen, A. & Claussen, J. (2024). “The Editor and the Algorithm: Recommendation Technology in Online News.” Management Science.
  • Holtz, D., Lobel, R., Liskovich, I. & Aral, S. (2025). “Reducing Interference Bias in Online Marketplace Pricing Experiments.” Management Science.

Assessment 
Assessment is based on active participation; students must attend sessions and complete the readings to pass.

Workload 
Online synchronous: 10 hours, i.e., 4 sessions × 2.5 hours (lecture + discussion + activities/labs) 
Asynchronous / self-study: 46 hours

  • Required pre-reading (papers): 32 hours (assigned before each session)
  • Independent study / consolidation: 14 hours (review notes, optional R/Python template walkthroughs, short personal reflection/design notes—not submitted)

Attendance 
Attendance is mandatory. The course certificate will be issued only to participants who have fulfilled all course requirements, which include:

  1. Required attendance at the course sessions.
  2. Successful completion of the course assessments in accordance with the assessment criteria.
