A new interdisciplinary study led by Dr. Aviv Barnoy (Erasmus University Rotterdam), together with Dr. Ori Freiman, Prof. Arnon Keren, and Prof. Boaz Miller, introduces a fresh way to understand and moderate toxic content online. Published in Social Epistemology, the research grew out of a collaboration with OpenWeb, a major community-engagement platform seeking better tools to support healthy online discussions.
The team began by tackling a basic problem: although “toxic content” is a common term in moderation and policy debates, it remains poorly defined. Their study proposes a clearer framework, showing that toxicity is best understood as a violation of conversational norms rather than a single, catch-all category. This makes it easier to tell different types of harmful content apart—and to decide how to address each one.
The researchers identify two main kinds of norm violation. The first is epistemically toxic content, which breaks norms of truth and responsible information sharing; it includes false, misleading, or conspiratorial claims. The second is civilly toxic content, which violates norms of respectful interaction, such as hate speech, doxxing, personal attacks, sexually explicit remarks, and threats.
This distinction matters for moderation. Civilly toxic content often requires swift removal to protect users. Epistemically toxic content, however, may be handled through measures like contextual information, friction, counterspeech, or labels rather than deletion.
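To make the distinction concrete, here is a minimal, purely illustrative sketch of how a platform might encode the two categories and route flagged content to different responses. The type names, function, and action strings are hypothetical, not drawn from the paper or from OpenWeb's tooling.

```python
from enum import Enum, auto

class ToxicityType(Enum):
    """Hypothetical labels for the paper's two norm-violation categories."""
    EPISTEMIC = auto()  # violates norms of truth and responsible information sharing
    CIVIL = auto()      # violates norms of respectful interaction

def route_moderation_action(toxicity: ToxicityType) -> str:
    """Pick a response based on which conversational norm the content violates.

    Civilly toxic content (hate speech, doxxing, threats) is removed promptly
    to protect users; epistemically toxic content (false or misleading claims)
    gets softer interventions such as labels, added context, or friction.
    """
    if toxicity is ToxicityType.CIVIL:
        return "remove"
    return "label_and_add_context"

# Example: an upstream classifier (not shown) has already flagged two comments.
print(route_moderation_action(ToxicityType.CIVIL))      # -> remove
print(route_moderation_action(ToxicityType.EPISTEMIC))  # -> label_and_add_context
```

The point of the sketch is only that the two categories branch to different interventions; a real pipeline would, of course, involve classification, appeals, and human review.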
The authors note that their partnership with OpenWeb provided valuable insights into real-world moderation challenges. They also credit peer reviewers for helping sharpen the article’s arguments. The full paper, “Norm Violations in Online Discourse: Epistemic and Civil Foundations for Platform Design and Moderation,” is openly accessible and aims to support researchers, moderators, and platform policy teams working to improve online conversations.