A group of faculty from my institution gathered earlier this month to discuss the presence of artificial intelligence within higher education, and more specifically at our university. There was the usual hand-wringing over the prevalence of ChatGPT-generated text in student work, which is the most immediate concern for many of us who try to assess student performance in the classroom. Often, in discussions such as these, the question arises of how best to design exams or papers so that students are not able to use AI-powered applications. And that is a worthwhile question. But as we talked, the conversation quickly turned more existential. What are we doing here?
Why precisely is it “bad” for students to use ChatGPT in course assignments? The way one answers that question says a lot about what one thinks concerning the purpose of education itself, particularly a liberal arts education. Is the purpose of education to memorize facts? If so, internet searches and other storehouses of information have made that a less pressing need. Is it to access a particular canon of texts? The internet has made texts far more accessible (not to mention the ongoing renegotiation of our canons). Is it to learn to read and write well? A bevy of internet tools can help us with that, or even do it for us.
I am not here claiming that the internet’s ability to retrieve information, provide access to texts, or help us communicate is an adequate replacement for educational institutions. But the fact that it is taking over more and more of the traditional roles of teachers in everyday life should cause us to reflect on the precise role of education. This reflection becomes even more pressing when we factor in the increasing role of AI.
A recent editorial in the New York Times recommended that teachers should “assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they’re being physically supervised inside a school building.” Students at elite schools have reported wide use of ChatGPT, even where it is against class rules. While it may have been possible in the recent past to detect whether a piece of academic writing was produced by a student or a chatbot, and may still be possible in some disciplines, it is getting harder and harder to do so. Eventually, such writing will be no more detectable than work students commission from hired proxies. And AI detector programs are notoriously unreliable. We can, indeed, lament the fact that students are willing to cheat, as they always have been, and we can hope that our educational institutions focus more on virtue development. But the fact remains that students will use these new and increasingly ubiquitous AI resources.
Yet … what if we embrace the fact that our students are using AI? If it is essential that we assess the level of our students’ own writing – for instance, in writing classes – we can have them write synchronously in blue books in class or take proctored exams. But for classwork that is done asynchronously, it would be wise for us to assume that students are using AI as a tool to help them in their research, their writing, and their analysis.
I try to think of ChatGPT and other generative AI platforms as providers of short, rather general, and highly customized encyclopedia articles. Imagine if the World Book of my youth had entries as specific as the questions we now type into ChatGPT. My guess is that we would have seen that as an entirely legitimate resource. Of course, one difference is that encyclopedia articles and their authors have gone through editorial review – much as Wikipedia entries are vetted by a community of editors – and can be credited in a footnote. ChatGPT’s answers, by contrast, are generated on the spot by a statistical model trained on vast amounts of text, and what ChatGPT provides is usually passed off as a student’s own words or ideas. Nevertheless, ChatGPT-generated material can be used as a resource; hopefully our students’ education will help them discern whether that resource and the information it provides are reliable, and whether the writing is worthwhile. Of course, our assessment of their work will also tell them that.
But these sorts of evaluations come only when we have in mind an idea of the purpose of education. And those ideas differ across fields of inquiry. I teach theology. If AI tools can help my students distill the main ideas of a particular theologian, remind them of the year of a particular council, or survey different global hermeneutics of a particular piece of Scripture, then that is great. What a wonderful resource! But that is not all theology is. Wrestling with a difficult text, sitting with concepts and allowing them to roll around in your mind, engaging in a community of interpretation, learning the practices of spiritual discernment, opening oneself to new and strange perspectives – these are things that ChatGPT cannot do for us. It is a tool. It is a resource for use in our formation. It is not formation itself.
The presence of AI in our lives is not going away. It will only become more prevalent in the apps we use, the news we read, the cars we drive, and even the ways we communicate. As with most technologies, this will have unexpected consequences, both positive and negative, but the rate and scope at which AI is becoming part of our everyday lives suggest that those consequences will be significant. One of the most significant, I think, is that AI is causing us to ask basic questions whose answers we have long taken for granted. What does it mean to be a person? What is a good life? What is work? And, for our purposes here, what is education? At the very, very least, AI has been a boon to education in that it is forcing us to ask such questions.