In offline or crowdsourced web experiments, behavioural scientists and consumer researchers often want to identify the intervention with the largest effect size. This is sometimes done with high-dimensional factorial designs, such as megastudies of nudges.
Current practice consists of randomising factor levels across observations. This approach is often underpowered, failing to identify the best intervention, and always inefficient, since it administers suboptimal factor levels too often.
Adaptive factorial designs
In this research I introduce a methodology for adaptive factorial designs that identifies the best factor. The methodology leverages algorithms recently introduced in machine learning and operations research to adaptively allocate observations across factors. Furthermore, it allows researchers to stop data collection once sufficient evidence has accumulated, without pre-specifying the sample size.
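To illustrate the general idea of adaptive allocation with early stopping, the sketch below uses Thompson sampling with Beta posteriors and a posterior-probability stopping rule. This is a minimal, illustrative example only: it assumes binary outcomes, treats each candidate intervention as an independent arm (ignoring the factorial structure), and the function name, parameters, and stopping rule are hypothetical, not the methodology introduced here.

```python
import random

def adaptive_best_arm(true_rates, threshold=0.95, max_n=10_000, seed=0):
    """Illustrative best-arm identification with Thompson sampling.

    true_rates: hypothetical true success probabilities (for simulation only).
    threshold: stop once one arm's posterior probability of being best
               exceeds this value.
    """
    rng = random.Random(seed)
    k = len(true_rates)
    a = [1] * k  # Beta(1, 1) priors: successes + 1 per arm
    b = [1] * k  # failures + 1 per arm
    for n in range(max_n):
        # Thompson sampling: draw one value from each posterior, play the argmax
        draws = [rng.betavariate(a[i], b[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        reward = rng.random() < true_rates[arm]  # simulated binary outcome
        a[arm] += reward
        b[arm] += 1 - reward
        # Every 100 observations, estimate P(arm is best) by Monte Carlo
        # and stop early if one arm clearly dominates
        if n % 100 == 99:
            wins = [0] * k
            for _ in range(500):
                d = [rng.betavariate(a[i], b[i]) for i in range(k)]
                wins[max(range(k), key=lambda i: d[i])] += 1
            best = max(range(k), key=lambda i: wins[i])
            if wins[best] / 500 >= threshold:
                return best, n + 1  # identified arm and sample size used
    # Fallback: return the arm with the highest posterior mean
    return max(range(k), key=lambda i: a[i] / (a[i] + b[i])), max_n
```

In a simulation with a clear gap between arms, e.g. `adaptive_best_arm([0.05, 0.8])`, the procedure typically stops after a few hundred observations rather than the full budget, which is the source of the efficiency gains over fixed randomisation.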
Simulation experiments show large gains over standard randomisation, and the methodology also outperforms alternative adaptive algorithms. A few extensions deal with interaction effects and multiple hypothesis testing. A Python package automates the methodology via Qualtrics, facilitating easy implementation in practice.