Join us for this seminar.
- Researcher
- Date: Monday 9 Mar 2026, 11:00 - 12:30
- Type: Seminar
- Spoken language: English
- Room: T09-67 or Teams
- Ticket information: Microsoft Teams
  Meeting ID: 329 568 505 327 1
  Passcode: or2ck6BY
Using an eight-week field experiment at a major European retailer, we examine whether deviations from AI-based recommendations reflect tacit expertise in settings where workers have limited autonomy to challenge them. We study an AI-augmented inventory auditing routine in which an AI flags items with potential inventory inaccuracies for retail salespersons to verify. In the treatment condition, we deployed a random forest model to capture individual adherence preferences and modified the AI routine to test whether incorporating observed deviations improves operational performance. Our results show that integrating worker deviations increases routine completion by 3% and boosts sales by 2% in highly volatile stores, while reducing sales by 1% in stable stores. These findings suggest that in low-autonomy settings, deviations from AI recommendations are often meaningful, signalling tacit expertise rather than behavioural error, and that their value is greatest when environments are volatile and demand continuous adaptation. We identify three mechanisms (operational, temporal, and physical awareness) through which workers calibrate their adherence to AI systems, and we introduce the concept of expertise eclipsing to describe how AI systems can systematically obscure the knowledge of low-autonomy workers.
- More information: Contact Lianne Speijer / Stef Lemmens
