Undoubtedly LLMs (large language models), and in particular ChatGPT, are the hot topic in education right now. David Hopkins has helpfully started and shared a flipgrid where he is sharing articles around generative AI, and I know many others are doing the same. Amongst the hype there is thankfully a growing body of people writing informed critiques. In this post I just want to quickly highlight a couple of publications that I think are must-reads.
Firstly, the UNESCO Quick Start Guide to ChatGPT and Artificial Intelligence. This provides a really good overview, including a useful flow chart to help guide decisions around using ChatGPT, applications for education, and some of the current issues. I suspect this will become a “go to” resource. It’s something that all educators should read.
And once they’ve done that, I have to recommend two longer pieces by Helen Beetham. Firstly, “on language, language models and writing“. In this essay, Helen really gets to grips with a key issue that is missing from many of the articles about LLMs and ChatGPT: what is the purpose of writing? Why do we do it? It’s not just about the structuring of text or personal reading. I think most people (well, at least you, dear reader) do now understand that these language models work on prediction and have no sense of context. So although the text may read well, it will often lack purpose and understanding. As Helen points out, “Writing by human writers is not only about the world, it is of the world and accountable in it.”
She goes on to explore some of the potential benefits of using systems such as ChatGPT. Can they be seen as writing partners? We supply the prompts, they supply the text . . ? I was struck by this:
“The illusion that these are more than tools or interfaces – that they are our partners in language, our interlocutors. We already spend large parts of our lives engaged in vivid graphical and sensory illusions. We should count the costs and benefits before rushing into a life of dialogue with illusory others.”
And this:
“Students see writing as a diverse, messy, inexact, variously motivated practice they are developing for themselves. Then perhaps they can aspire to be a writer among writers, and not a human version of ChatGPT.”
I thank Helen for being the writer she is to have come up with that last turn of phrase. And then she goes on to point out:
“But tools are not neutral. Just as language is not ‘simply’ the words we use to express our meanings to other people, tools are not ‘simply’ the means we use for exercising our personal intentions in the world. Tools carry the history of how they were designed and made. They shape practices and contexts and possible futures. . . With so many other tools we can use creatively, we must surely weigh the risks against the creative possibilities.”
In terms of education, Helen also raises some really valid points for strategic leadership in universities. It does seem an awful lot of responsibility is being heaped on students; maybe we need to be asking these questions:
“While students are held stringently to account for their use of LLMs, how will universities account to students for their own use of these systems? Can they hold out against black-box capabilities being embedded into the platforms they have come to depend on? Who is assessing the risks, and how are those risk assessments and mitigations being shared with the people most affected? These are questions that universities should be attending to with at least as much energy as they are policing students’ use of apps.”
There is also an accompanying piece, “students assignments in a time of language modelling”. Again, this is a really thoughtful (and pragmatic) piece about why, how and when to use writing tasks in assessments.
I would thoroughly recommend reading both essays, and engaging with Helen’s writing over on Substack.