Teachers consider ChatGPT’s place in the classroom
Educators wrestle with how to respond to artificial intelligence programs in school
Jason Thacker begins each new semester by explaining his expectations and class policies to his philosophy, ethics, and worldview students at Boyce College in Louisville, Ky. This semester, he had a new topic to cover.
Thacker added a section about artificial intelligence and automatic text-generating programs to the plagiarism and cheating policies on his syllabus. Today’s AI programs can answer test questions, write essays, and solve math problems in seconds. Thacker, who is also the chair of research in technology ethics for the Southern Baptist Convention’s Ethics and Religious Liberty Commission, said new technology doesn’t change the rules about cheating.
“We need to say, full stop, that any attempt to pass off work of someone or something else’s as your own is deceptive,” he said. But Thacker also realizes that artificial intelligence is here to stay, even in the classroom.
“In certain situations, it would be beneficial to ban a technology like this, but reality is you’re not going to ever truly ban it,” he said. “You’re never going to be able to keep students from using it because if you banned it on school computers, the next thing you know they’re using their smartphone tablets or they’re using it at home.”
As artificial intelligence, or AI, programs become widely available, educators are weighing their potential risk for cheating against their value for teaching critical thinking skills.
On Nov. 30, the artificial intelligence research company OpenAI launched ChatGPT, an AI chatbot program that can answer conversational questions, create computer code, and write in a variety of styles. OpenAI programmed the free software using data collected from books, internet articles, and human feedback.
“You have a technology that can write songs and things that historically only humans have ever been able to do,” said Luke Phillips, executive director of marketing and enrollment at Pepperdine’s Graduate School of Education and Psychology. “Even its first iteration was, I think, far superior to other AI solutions.”
As ChatGPT gained users who tested its ability to write essays, generate tests, and solve complex equations, educators began raising concerns about cheating. In early January, New York City schools blocked access to the ChatGPT website on school devices and networks. Universities in Australia returned to handwritten exams after students were caught using AI to write essays. Colleges across the United States are scrambling to adjust curricula and academic integrity policies to account for the use of AI tools.
Though ChatGPT can generate responses to prompts that theoretically could pass as student work, many education professionals say the program still lacks the ability to create unique perspectives beyond compiling information. A Princeton University student created the app GPTZero to detect whether an essay was written by a human or by the AI tool, alleviating some concerns about cheating.
Kyle Kellogg, a high school chemistry teacher in San Antonio, said teachers should focus more on guiding students to make honest choices than simply removing opportunities to cheat.
“It’s not new for students to try to gain an edge,” Kellogg said. “Teaching is such a people business. It’s not a technology business. It’s a relationship between teachers and students and trying to convince students that what you’re selling is worth doing the right way.”
Because AI tools like ChatGPT compile information from online sources and are programmed by people, individual biases can creep into the technology. Thacker and Kellogg said teachers can use AI to teach students about information sourcing and how to think critically about the information they read.
“Some of the current limitations and kind of drawbacks to a system like this is it doesn’t show you where the information came from,” Thacker said. “They’re not just good or bad, but they’re also not neutral, and they’re distinctly shaping the way we view the world.”
As educators wrestle with how to instruct students about the ethical concerns surrounding AI programs, some teachers are utilizing the same tools to help build curricula, automate tasks, and plan classroom exercises.
Brian Stiles, a high school journalism and media teacher in Blythewood, S.C., said he uses ChatGPT to generate writing prompts and ideas for his students. “It can spit out all kinds of really generalized ideas for stories that they can tell, which is a great starting point for a lot of kids, especially the ones that struggle with coming up with creative approaches,” he said.
Stiles and Phillips agree that educators and institutions should not shy away from teaching students about new technology.
“We as educators are educating students about how to be prepared for a post-2023 world where these things are realities,” Phillips said. “I think we do all of us a disservice if we don’t incorporate this kind of learning in the classroom.”
Ultimately, Thacker said, the deeper question is not just about whether students should have access to AI technology or not. Rather, educators must guide students to question technology’s influence, think critically, and develop a desire to learn and not just consume.
“We assume that technology is a tool that we use for good or for bad, but reality is that technology is also changing us,” he said. He hopes Christians approach conversations about technology by focusing on its effect on the whole person. “It’s shaping us in many ways, shaping our understanding of God, shaping our understanding of ourselves, shaping our understanding of the world around us and how we interact with one another.”