Join us for the research seminar
Abstract
Evaluating and selecting among numerous alternative solutions shapes the trajectory and rate of innovation. Central to this process is a fundamental tension between novelty and feasibility, two criteria that evaluators, operating under bounded rationality, cannot weigh simultaneously and therefore navigate with heuristics. A common heuristic is criteria-sequencing, in which evaluators prioritize different criteria at successive evaluation stages. Yet the idiosyncratic ways evaluators sequence these criteria often introduce inconsistencies, creating significant path dependencies in the process. In this paper, we propose that artificial intelligence (AI) offers a potential lever for structuring evaluators’ criteria-sequencing heuristics. Leveraging a field experiment with 353 evaluators, we investigate how the sequencing of AI recommendations focused on novelty and feasibility shapes the mean and variance of innovation among selected solutions. Our results reveal a mean–variance innovation tradeoff: a feasibility-then-novelty sequence leads to selections with higher mean innovation, whereas a novelty-then-feasibility sequence yields selections with greater innovation variance. Furthermore, a post hoc analysis uncovers that the format accompanying AI recommendations also matters: a dynamic format (i.e., an interactive chatbot) increases the innovation variance among selected solutions but reduces their mean innovation relative to a static format (i.e., fixed explanatory content). Because these effects operate independently, our findings show that in AI-augmented evaluations, both the sequence of criteria and the format accompanying AI recommendations shape the mean–variance innovation tradeoff. These differences have important implications for the composition of innovation portfolios. Our paper contributes to research on innovation evaluation and to the emerging literature on human–AI collaboration in innovation-related contexts.
