Online Behavioral Research

ERIM Summer School 2026 Course
  • Dates: 23-25 June 2026
  • Time: 09:30-12:30 & 14:00-16:00
  • Format: Online
  • ECTS:
  • Instructor: Dr. Gabriele Paolacci
  • Fee: €500 (free of charge for ERIM PhD candidates)


Abstract

Thanks to platforms such as Prolific and other vetted panels, researchers across the social sciences can now conduct large-sample surveys and controlled experiments with unprecedented speed and reach. Yet the validity of online data is increasingly threatened by sampling biases, inattentive or impersonating participants, professional respondents, and AI-generated responses.

This course equips students to harness the full potential of online experimentation—including research designs often deemed unfeasible outside the physical lab—while meeting the highest standards of rigor, transparency, and reproducibility expected by leading journals in marketing, management, psychology, and related fields. Participants will learn how to detect and prevent common threats to research validity, implement state-of-the-art quality-control techniques, and design studies that remain credible in a rapidly evolving online research environment.

Beyond avoiding pitfalls, the course emphasizes creativity and innovation in experimental design, showing how digital tools can expand—rather than constrain—the behavioral researcher’s toolkit. Throughout, technical and methodological insights are explicitly tied to participants’ own research, serving the broader aim of strengthening the design, execution, and reporting of their projects, and ultimately increasing their chances of successful publication.

We will cover a broad set of topics essential to conducting high-quality online research, including those outlined below. We will do so through a combination of lectures, demonstrations, and discussions. Critically, all material will be connected to your own research, and you will receive individualized feedback on your projects throughout the course.

- Basic Decisions in Online Research (e.g., sample size, payment, study description/advertising, including preregistration of online studies)

- Sampling Issues (e.g., going beyond WEIRD samples, representative sampling, reducing sampling bias, recruiting rare populations, avoiding imposters)

- Dealing with Threats to Data Quality (e.g., AI agents and bots, AI usage by participants, non-naive participants, inattentive and dishonest participants)

- Advanced Study Designs (e.g., incentivizing participants, stimulus selection, interactive, longitudinal, and cross-cultural designs, studies involving physical interactions, innovative methods)

- Reporting Online Research (e.g., constraints on generality, ensuring reproducibility, implementing open science practices)

- Publishing Online Research (e.g., going beyond scenario studies, increasing external and ecological validity, field studies, "A/B testing", "real" choices online)

By the end of the course, you will know everything that, in 2026, is critical to design, execute, report, and publish valid online research, irrespective of your field (e.g., marketing, management, psychology, economics). You will also have made concrete progress on your own research projects.

The following five readings must be completed before the course starts:

Mize, T. D., & Manago, B. (2022). The past, present, and future of experimental methods in the social sciences. Social Science Research, 108, 102799.

Chandler, J. (2023). Participant recruitment. In A. L. Nichols & J. Edlund (Eds.), The Cambridge Handbook of Research Methods and Statistics for the Social and Behavioral Sciences: Volume 1: Building a Program of Research (pp. 179-201). Cambridge University Press.

Munafò, M. R., Nosek, B. A., ... & Ioannidis, J. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1-9.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2021). Pre-registration: Why and how. Journal of Consumer Psychology, 31(1), 151-162.

LeBel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., & Vanpaemel, W. (2018). A Unified Framework to Quantify the Credibility of Scientific Findings. Advances in Methods and Practices in Psychological Science, 1, 389-402.

Additional literature will be shared at the start of the course, for students to integrate with the course material (slides, tutorials).

Assessment 
Assessment will be based on a take-home assignment that you will hand in approximately one week after the course. You will outline an online research approach to address your own research question, detailing your study design and explaining how your implementation and reporting choices mitigate the key threats discussed during the course.

At the beginning of the course, I will provide you with a slide-deck template designed to prompt and structure your critical design decisions. Throughout the course, you are encouraged to work on your project, progressively complete the slide deck, and receive feedback from me. At the end of the course, you will submit a short recorded video presentation accompanying your slides, which will serve as the basis for assessment.

Workload 
Pre-readings: 10 hours
Online sessions: 18 hours
Self-study (literature, slides, tutorials): 26 hours
Assignment/project: 30 hours
(Total 84 hours)

Attendance 
Attendance is mandatory, though you will not be penalized if you have to miss a session. The course certificate will be issued only to participants who successfully complete the course assessment.
