Reclaiming our humanity
If we allow AI to organize and develop our ideas, we’ll lose the ability to be thinking human beings

The AI revolution is upon us, and its opening salvos have struck in my own world of higher education. A recent study suggests that 86 percent of students are using AI to complete their university assignments. One student at Columbia University boasted that AI has written about 80 percent of his college essays (including his application essay), because, to his mind, “most assignments in college are just not relevant.” So, why bother with attending college at all, let alone an Ivy League institution? “It’s the best place to meet your co-founder and your wife,” he replied.
This mercenary approach to higher education isn’t isolated and isn’t going away, but, at the risk of sounding downright quaint in our technology-obsessed culture, it threatens more than the enforcement of university policies on academic integrity. It calls into question what it means to be human at all.
As a professor, I remember where I was when I first heard about ChatGPT almost three years ago. It was one of those moments when you knew nothing could ever be the same again. Initially, student submissions dependent upon a large language model (LLM) were somewhat easy to detect: bloodless and repetitive content coupled with pristine but bland syntax, and works cited that quite obviously did not exist in any scholarly book or journal. But as the technology developed, the bots grew more sophisticated. Students can even prompt the chatbot to introduce a few grammar mistakes to make the end product seem more authentic.
At one faculty training on the use of AI last year, a colleague reported some early returns on faculty’s ability to detect AI manually: it was a 50/50 ball, to use a sports analogy. You were as likely to be wrong as right in your assessment, making any kind of punitive action for academic dishonesty difficult to administer.
Many in higher education have chosen an “if you can’t beat them, join them” approach to the technology. Like a parent who reasons that it is better for their teenagers to drink alcohol at home than elsewhere, many professors are surrendering the field just as the battle for academic integrity is beginning. Why not allow and even facilitate the use of AI for developing outlines and ideas and then just encourage the students to make their own contribution to this robotic starting point? Isn’t resistance to AI just like Luddite reactions in the past to other technological advances: the use of writing, the printing press, the computer, or the internet? We’re all using AI without necessarily knowing it anyway when we do a Google search or use spell-check or use predictive text in an email. So, why not seek to corral the technology and simply delimit its legitimate use?
Some of these objections seem reasonable. And I acknowledge that some fields of study might be more conducive to the use of AI than others (engineering, architecture, medical technology, etc.). But when it comes to the humanities—literature, history, the arts, and my own areas of religion and philosophy—AI functions as an intractable barrier to the educational process. To state my objection succinctly, in the humanities, the process is the point. Pragmatically, if you allow students to develop their paper outlines using AI, they will just write the whole thing (or nearly the whole thing) using AI. But more substantively, if they outsource the early stages of the learning process to a robot, they will be less likely to develop the skills they need for the later stages.
Think in terms of Bloom’s taxonomy, a well-known framework for the hierarchy of cognitive skills: if we short-circuit the lower domains (remembering, understanding, and applying), students will be less likely to progress adequately to the higher domains (analyzing, evaluating, and creating). Again, the process is the point. Chatbots aren’t conscious agents (nor can they ever be). But humans are. Learning to collect, comprehend, and organize information with willful intention and trial and error is a massive part of what makes us thinking human beings. Without these early steps, higher-level thinking will become nearly impossible, or at the very least utterly uninteresting, as the Columbia student articulated it.
In short, my thesis is simple: Make the humanities human again. Perhaps the computer scientists or engineers can make use of AI without much loss. But the humanities require a much more severe self-limitation. Blaise Pascal, the famed 17th-century philosopher and mathematician, once wrote that the “sole cause of our unhappiness is that we do not know how to stay quietly in our room.” Studying the great books, asking the enduring questions of human existence, contemplating the divine—these activities can only take place when we shut our doors, close our lids (both computational and, perhaps, ocular), and learn to stay quietly in our room.
