Join us for the marketing seminar
Abstract
Scientists and practitioners are moving aggressively to deploy digital twins (LLM-based models of real individuals) across social science and policy research. We conducted 19 pre-registered studies with 164 diverse outcomes (e.g., attitudes toward hiring algorithms, intention to share misinformation) and compared human responses to those of their digital twins, each trained on the person's previous answers to over 500 questions. We find that the digital twins' answers are only modestly more accurate than those of the (homogeneous) base LLM and correlate weakly with human responses (average r = 0.20). We document five ways in which digital twins distort human behavior: (i) stereotyping, (ii) insufficient individuation, (iii) representation bias, (iv) ideological bias, and (v) hyper-rationality. Together, our results caution against the premature deployment of digital twins, which may systematically misrepresent human cognition and undermine both scientific understanding and practical applications.
