Dialogue and trust pave the way for good AI practice
At the Centre for Language Generation and AI, researchers study language models and their creative and educational potential. The CED has spoken with Rebekah Baglini, Associate Professor of Linguistics and Co-Director at the centre. She presents her recommendations on how to approach AI from a university pedagogical perspective.
The Centre for Language Generation and AI (CLAI) was established in 2023. Researchers at the centre study foundational language models: models trained on large amounts of text data without specialisation in any particular task.
At CLAI, the aim is to explore the implications of these models for research on cognition and language, ethical issues, and the creative and pedagogical potentials of the technology.
CLAI is located at the School of Communication and Culture and brings together researchers from fields such as linguistics, cognitive science, literature, and philosophy to study the new and rapidly evolving technology.
Summer University as a natural laboratory
In the summer of 2023, Rebekah Baglini and three colleagues organised the summer course 'Artificial intelligence and co-creativity' for first-year students at Arts. The course aimed to give students an understanding of the principles and history behind so-called Large Language Models (LLMs), such as ChatGPT.
In addition, the goal was for students to gain experience using LLMs as a tool for collaborative projects and as support in their writing process.
“The course functioned as a natural laboratory for us to explore the potentials and pitfalls of teaching ‘with’ generative AI. It became a space where we could explore the pervasive concerns about student misuse of artificial intelligence,” Rebekah Baglini says.
Recommendations for responsible use of AI
A survey conducted by Djøf in the autumn of 2023 revealed that more than half of the Djøf members who are students have used generative AI writing assistants during their studies. But it is still a new world, and it can be hard to keep up:
“It is still a challenge for universities to properly prepare students to utilise the new technologies effectively while maintaining academic integrity,” Rebekah Baglini says.
Therefore, she points to some recommendations based on the experiences from the summer course and her research:
“The recommendations highlight the importance of engaging students in open dialogues about the learning objectives, and co-defining responsible AI use in context-specific settings,” Rebekah Baglini explains.
Among other things, she emphasises that LLM policies cannot be a one-size-fits-all solution:
“What is legitimate and illegitimate will vary across disciplines and contexts. In a Technical Sciences course, it might be more acceptable to use an LLM as a writing assistant for a formulaic research report, but not in a writing-focused humanities course,” she explains.
She believes that faculties need to be clear in their study regulations and learning objectives about what is considered acceptable and ethical use of LLMs. She recommends that regulating the use of LLMs should be founded on communication and trust, explaining that the software that supposedly can detect AI-generated text is not reliable:
“The software can cast students as adversaries and lead to them being falsely accused of cheating. As a result, many American universities have begun reversing their adoption of these detection tools. Even OpenAI, the maker of ChatGPT, shut down its detection classifier due to its ‘low rate of accuracy’,” Rebekah Baglini says.
Instead, she recommends empowering students to have agency in their educational practice and learning, rather than creating an adversarial relationship based on mistrust between teachers and students.
A key skill in the future labour market
“Students will increasingly be expected to have proficiency in using LLM tools in the labour market, which makes prompting of LLMs an important skill,” Rebekah Baglini says.
That is why she recommends that all students learn to use these tools – not just students in IT-related degree programmes:
“Prompting LLMs is a skill that must be learnt. The students need to learn ‘computational thinking’. You can develop LLM literacy within any field of study,” Rebekah Baglini explains.
Resources for educators
There are many resources for teachers who want to make constructive use of LLMs in the classroom. Rebekah Baglini offers some suggestions for further reading:
- MLA working group on writing and AI: Exploring AI Pedagogy: A Community Collection of Teaching Reflections
- Free resources to explore and utilise ChatGPT and AI: Get an educator-focused approach to information, concerns, and uses for these powerful tools
- CLAI Reading Group: https://cc.au.dk/en/clai/events/artikel/clai-reading-group
Help is also available at the CED, including the 'Generative AI in teaching' course and the AU Educate page on chatbots.