The launch of the Erasmian Language Model

Join us to learn more about the first generative AI developed at EUR

The Erasmian Language Model (ELM) had its launch event on 9 October. Professors, staff, and students, as well as external parties such as government agencies and startups, joined the ELM development team to learn more about the generative AI model. How was it developed, what can it do, and how can we use it?

Michele Murgia

Introducing ELM: the need for an EUR-based AI language model

The idea behind the Erasmian Language Model (ELM) came about roughly six months ago, explains Evert Stamhuis, Professor of Law and Innovation and Academic Lead for AI Convergence at EUR. In one of the minor programmes available at EUR, AI and Societal Impact, students learn what AI models are, how they work, and what potential problems and improvements can appear. To the students involved with the minor, issues such as the data privacy of closed-source models linked to a central, remote server, as well as the environmental impact of the data used, became apparent. “Another issue that is present is the inherent biases that generative AI models possess, such as racist and sexist biases that can be seen in programmes such as ChatGPT”, explains Michele Murgia, project lead of ELM as well as coordinator of the minor.


What is ELM?

ELM is a generative AI model, developed and based at EUR. Unlike other publicly available AI models that run on a remote server, ELM is software that is downloaded onto the hard disk of the computer you use to access it. This addresses privacy concerns and reduces the environmental footprint by keeping the model small and suited specifically for academic research and education. It is a true open-source model (you have insight into both the model and the data), and it is trained in both English and Dutch. This aims to curb certain anglophone biases that models that only run in English may develop.
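
For readers who want a concrete picture of what running a model on your own hard disk means, the sketch below loads a small, locally stored language model with the Hugging Face transformers library and prompts it without any data leaving the machine. The folder name and prompt are illustrative assumptions, not the official ELM distribution.

    # Minimal sketch of local inference with a small causal language model.
    # "./elm-small" is a hypothetical local folder, not an official ELM path.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("./elm-small")
    model = AutoModelForCausalLM.from_pretrained("./elm-small")

    # The model is trained in both English and Dutch, so either language works here.
    prompt = "Wat onderzoekt de Erasmus Universiteit op het gebied van AI?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))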

How was ELM developed?

ELM was developed in three steps. First, the model needed LLM (Large Language Model) pre-training. This was done with the help of the University Library, using the publicly available repository of all Master theses and publications that have been published at EUR and Erasmus MC. Thus, at its core, ELM has been trained on the academic research conducted at EUR. The second step is supervised fine-tuning, in which the programme is given specific examples and prompts in order to learn to complete a certain request. The current version of ELM does not have a chat interface like other AI such as ChatGPT, but that may be a possibility in the future. Third and finally, reinforcement learning from human feedback was used to teach the programme to distinguish between good and bad results that it generates.
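
As a rough illustration of the second step, supervised fine-tuning, the sketch below trains a locally stored model on a handful of example prompt-and-response pairs using the Hugging Face Trainer. The paths, example pairs, and settings are assumptions for the sake of illustration, not the actual ELM training setup.

    # Sketch of supervised fine-tuning: the model learns from example prompts
    # paired with desired completions. Paths and data are illustrative only.
    from torch.utils.data import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("./elm-small")   # hypothetical path
    model = AutoModelForCausalLM.from_pretrained("./elm-small")
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    pairs = [
        ("Summarise this thesis abstract:", "The study examines ..."),
        ("Vat deze scriptie samen:", "Het onderzoek richt zich op ..."),
    ]

    class PromptDataset(Dataset):
        def __init__(self, pairs, tokenizer):
            self.enc = [tokenizer(p + " " + r, truncation=True, max_length=256,
                                  padding="max_length", return_tensors="pt")
                        for p, r in pairs]
        def __len__(self):
            return len(self.enc)
        def __getitem__(self, i):
            ids = self.enc[i]["input_ids"].squeeze(0)
            mask = self.enc[i]["attention_mask"].squeeze(0)
            labels = ids.clone()
            labels[mask == 0] = -100  # do not compute loss on padding tokens
            return {"input_ids": ids, "attention_mask": mask, "labels": labels}

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="elm-sft", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=PromptDataset(pairs, tokenizer),
    )
    trainer.train()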


Currently, there are two versions of ELM: ELM Small and the full-sized ELM. Of these, ELM Small will continue to be developed, as the full-sized ELM is trained on Llama-2, the generative AI belonging to Meta, which is not fully open source and would thus stand in the way of making ELM a community-based model. ELM Small is the version intended to be downloaded onto personal laptops, as it fits on most laptop hard disks, taking up only 1.8 GB of storage. This version has 160 million parameters and gives the user full control over editing the programme. This includes adding training material to enhance the model, as well as opportunities for supervised fine-tuning and reinforcement learning. This is the version of ELM that students in the minor AI and Societal Impact have been testing, adding to, and improving.
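
To get a feel for those numbers, a short check like the one below (again with a placeholder path) counts the parameters of a locally downloaded checkpoint; for a model of this size the count should come out at roughly 160 million.

    # Quick sanity check on a locally stored checkpoint: count its parameters.
    # "./elm-small" is a placeholder for wherever the downloaded model lives.
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("./elm-small")
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params / 1e6:.0f} million parameters")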

João Goncalves

For whom is ELM?

“We want this model to be efficient and serve our intended purposes at EUR, thus we have made the deliberate choice to keep it a smaller model”, explains João Goncalves, academic lead of ELM and teacher in the minor AI and Societal Impact. Both Michele and João emphasized that ELM is not a traditional model but a community-based model: the end users it is designed for, everyone at EUR, are also the co-creators of ELM. Thus, anyone who uses ELM can have a direct impact in shaping the model for their specific academic needs. “The success of the model relies on you, the users, becoming its co-creators”, states João.

“The success of the model relies on the users becoming its co-creators”

What are the next steps for ELM?

Towards the end of the event, it was clear that many students and staff were enthusiastic about the project, and they asked relevant questions. Topics ranged from departmental bias within research to whether the programme can trace where its information comes from (not possible due to the very nature of LLMs) and whether it can acknowledge its own limits in what it can and cannot answer based on the information it is trained on.

The next step for ELM is its continued co-creation. The call for anyone who is interested in using and developing ELM is open! ELM can be accessed from this link.

To help with the development of ELM, the team behind it is looking for people interested in the project. Contributions can range from providing information to train the model with, to providing examples and prompts for supervised fine-tuning, to simply giving general feedback about the model. If you are interested, please contact Michele Murgia or João Goncalves.

