GAI cannot take humans out of the equation
There can be many benefits to incorporating generative artificial intelligence in teaching. However, GAI can be both incorrect and biased, which is why it is important that both teaching staff and students learn to use the technology correctly and in the right context.
Since the release of ChatGPT in November 2022, higher education institutions have been debating how to incorporate Generative Artificial Intelligence (GAI) into their education programmes. Easy access to GAI tools such as ChatGPT and Microsoft Copilot presents opportunities but also challenges for teachers and higher education institutions.
“A major advantage, for example, is that GAI allows teachers to create more personalised educational content that would otherwise be too time-consuming. They can, for example, use GAI for learning analytics and then generate alternative examples, tasks or additional material tailored to students’ specific learning needs. A more personalised learning experience can improve students’ engagement and lead to deeper learning,” argues Christopher Neil Prilop, associate professor of applied learning technology at the CED.
“However, GAI also entails risks and challenges. For instance, GAI tools such as ChatGPT or Microsoft Copilot present their output with confidence even though the information may be incorrect or biased. Hence, students must have sufficient domain knowledge to evaluate the output. Furthermore, the openly available Large Language Models are driven by commercial interests, which raises concerns about data privacy and security,” he elaborates.
GAI demands AI literacy
Higher education teachers need to be able to navigate the potentials and challenges of using GAI in their teaching. First and foremost, teachers need to understand the technology as well as its limitations:
“That includes students’ misconceptions. A study conducted by DJØF last year showed that a large number of students perceived GAI, like ChatGPT, as a search engine. This is exactly what GAI should not be used for, as the chatbot generates output based on a statistical model. To put it bluntly, ChatGPT is a guessing machine that might present incorrect or biased information,” Tom Gislev Kjærsgaard emphasizes.
Tom Gislev Kjærsgaard is a development consultant at the CED. Together with Christian Winther Bech, who is an IT coordinator at the CED, he has held professional development courses about GAI for more than 600 members of staff at AU.
“In the beginning, some teachers perceived GAI as a threat to education. Now they are seeing the potential of GAI in relation to their educational practices,” Christian Winther Bech says.
He goes on to explain that to meaningfully implement GAI in teaching, teachers need to consider how they can mitigate possible risks such as students relying too much on GAI-generated answers. Both teachers and students require AI literacy to be able to make informed decisions.
Using GAI in feedback processes
A part of pedagogical AI literacy is to assess whether GAI tools can enhance students’ learning experience:
“GAI tools are exactly that – tools – with the potential to enhance students’ learning experience, but they might also entail detrimental effects. As with all educational technology, using GAI does not necessarily lead to learning. To enhance students’ learning experience, it is crucial to purposefully incorporate GAI in the learning process,” Christopher Neil Prilop points out.
One of Christopher Neil Prilop’s main research fields is feedback in digital environments. Feedback has been shown to be one of the most effective interventions that can be offered to students in higher education. However, providing students with high-quality feedback is demanding:
“Feedback processes are extremely time-consuming and add to the high workload of university teachers. That is why expert feedback by a teacher is only provided to a limited extent in higher education. Accordingly, there are already several research projects investigating to what extent GAI can be of assistance in providing feedback,” Christopher Neil Prilop says.
He adds that research shows that feedback provided by a GAI can outperform expert feedback on a surface level and that students actually perceive it as more trustworthy than expert feedback. Though these findings seem to offer a way to provide extensive feedback to students, Christopher Neil Prilop has a different take on them:
“To me, these findings are actually more alarming than promising. Feedback processes are complex interactions. GAI does not perceive deeper meaning or context. That means the feedback will always be on a superficial, surface-structure level. GAI cannot take interpersonal factors into account. This can lead to feedback that does not take feedback recipients’ cognitive, meta-cognitive, or motivational disposition into account, which can entail limited feedback uptake.”
Although the higher level of trust in GAI feedback might be due to a novelty effect, it could also be based on students’ misconceptions of the technology. They might believe the information provided by GAI is 100% correct:
“If we think a machine can do the same or a better job than a human, it devalues feedback. As a result, students will place less value on feedback processes. We know that providing and receiving feedback makes students engage more with tasks and leads to deeper learning. So ‘outsourcing’ feedback can be detrimental to students’ learning process. Consequently, GAI should be used to support and enhance feedback processes, not to take humans out of the equation,” Christopher Neil Prilop concludes.