AI is everywhere, from social media algorithms and smart thermostats to word processing tools. Almost everyone uses it to some extent, and it is widely believed to help us work faster and more efficiently. At least, that is the prevailing view of AI, but is it actually correct? And what does this mean for lawyers? These are precisely the questions explored by Cees Zweistra, assistant professor of law, ethics and technology, and Julie Hoppenbrouwer, PhD candidate in disruptive technologies in legal practice, in their new book 'Sneller & Beter? Het vraagstuk van technologie en AI in de rechtspraktijk' ('Faster & Better? The question of technology and AI in legal practice'). In this article, they elaborate on the impact of AI on legal practice and explain why lawyers should take a critical view of its use.
"Preliminary, as yet unpublished research conducted by the Center for Law, AI & Design in 2025 shows that the vast majority of surveyed lawyers (>90%) use (legal) generative AI systems such as Copilot, Harvey and CoCounsel. These tools are used for tasks such as structuring ideas, generating summaries, translating texts and improving documents," say Zweistra and Hoppenbrouwer.
"The use of generative AI, specifically large language models (LLMs) for knowledge production, appears to be becoming increasingly dominant in legal practice. Such models make it possible to 'outsource' the interpretation of the law. In the long term, this may have significant implications for legal practice," Hoppenbrouwer explains. Zweistra emphasises the potential consequences of this development: "Providers of legal AI systems will gain a more dominant position within the law and legal practice, and may even influence the development of the law itself. This raises questions about the future independence of actors in legal practice and about how the legal profession can ensure that the interests of AI providers and new AI tools align with the interests of legal practice. After all, independence is a crucial pillar of legal practice, and it is under pressure."
Does AI solve problems or does it also create new ones in (legal) practice?
"We observe that legal practice often looks for possible applications of AI, rather than identifying problems for which AI could offer a solution. This needs to be reversed: the problem should be central, after which a suitable solution can be sought," says Hoppenbrouwer. AI is often presented as a way to make lawyers more productive. "But if you look at our history with IT applications such as e-discovery (a tool used to search digital documents for evidence in criminal proceedings), it turns out that this has not actually made us more productive. The fact that AI makes it easier to draft documents may even lead to more documents being produced than necessary, placing an even greater burden on the system," Hoppenbrouwer explains. Zweistra and Hoppenbrouwer also warn of the risk of ending up in a vicious AI cycle: "If AI is used for tasks that lawyers are already good at, their skills may deteriorate over time, creating a situation in which the use of AI ultimately causes lawyers to rely on AI even more."
AI is also widely seen as an important opportunity to improve access to justice. "We do see that potential, but we are also cautious," says Zweistra. "An increase in meritless objections, fabricated evidence and false reports, issues that deeply concern our contacts within the police and the judiciary, may actually place the system under greater strain." At the same time, he adds: "We do see opportunities for AI to contribute to fairer and more reliable judicial decisions, which could in turn enhance legal certainty and equality before the law."
Why should lawyers be particularly critical of AI right now?
Zweistra and Hoppenbrouwer highlight three reasons. "People are afraid of missing out. This fear is increasingly fuelled by the industry and by providers of legal AI. We see a risk that legal practice will feel compelled to enter the AI race without reflecting on the essential question: what problems do I actually have, and can AI really solve them?" Moreover, the introduction of AI leads to new forms of dependency that, according to Zweistra and Hoppenbrouwer, sit uneasily with the core values of independence and integrity in legal practice. "In addition, AI is a major driver of water consumption, electricity use and the depletion of scarce raw materials. AI is also often trained on copyrighted material without the required permissions. As a result, the current use of AI sits poorly with the core value of integrity and with the societal role of lawyers," Hoppenbrouwer explains.
Is faster really better in legal practice?
"Faster can be better if, for example, it leads to more cases being resolved, which could improve access to justice overall. At the same time, speed is not one of the most pressing problems in legal practice. Careful and fair adjudication of cases is a more urgent concern, and the question is whether AI actually contributes to that," say Zweistra and Hoppenbrouwer. "The use of an LLM to draft the reasoning of a judicial decision in order to speed up the process deprives the judge of the time needed to reflect on their own judgment. Access to justice is not only about the number of cases that are resolved. The problem also lies in legal alienation, in feelings of powerlessness, even once the courtroom has been reached, and in not feeling seen or heard. Time for dialogue and critical reflection is essential in legal practice," Hoppenbrouwer adds.
When lawyers wish to deploy AI, Zweistra and Hoppenbrouwer suggest drawing inspiration from a design-based approach. "This approach places the analysis of the problem at the forefront and can help prevent lawyers from ending up with AI systems they do not need and/or that do not work. To make this approach very concrete, lawyers could work through a checklist":
- What problem needs to be solved, and is AI the appropriate means to address this specific problem?
- Is the problem clear and well-defined, and can it be attributed to a specific person or role?
- What opportunity can the use of AI unlock, and why is AI the appropriate means to realise that opportunity?
- How can long-term, effective use of AI be achieved?
- To what extent does the introduction of AI alter the organisation's internal and external relationships in undesirable ways?
- To what extent is it possible to develop AI that is tailored to the organisation's specific needs, circumstances and stakeholders?
Should lawyers jump on the AI bandwagon?
"We believe that lawyers do not need to embrace AI unless it clearly adds value. At the same time, we recognise a market reality in which falling behind is hardly an option. That is precisely the greatest risk: that legal practice will adopt something it does not truly need, while at the same time introducing a host of new problems and dependencies in its wake," Zweistra and Hoppenbrouwer argue.
Although the use of AI in legal practice may offer benefits, they stress that it must be done carefully and, above all, in a way that aligns with the real problems lawyers face in practice. "Do not be afraid to go against the tide. It is crucial to think carefully about AI, about what it does and how it can be used, but only from the perspective of a specific question or problem that precedes the AI solution. Otherwise, you may quickly find yourself with a solution to a problem that does not exist."
More information
The book Sneller & Beter? is available in Dutch.
You can also listen to the Sneller & Beter? podcast on Spotify (in Dutch).
