
ChatGPA

AI is forcing teachers and students to redefine education



Emily Flaming started at John Brown University in fall 2022—the same year ChatGPT burst on the scene. Lots of people were talking about the AI chatbot, but Flaming didn’t know how—or if—she should be using it.

“It felt like a very scary thing,” she said. Flaming, who is working on her education degree, didn’t really understand the program, and she didn’t want to use it the wrong way.

Now, 3½ years later, Flaming is among the first generation of young teachers receiving their diplomas in a world where generative AI tools are fast becoming near-ubiquitous.

The moment feels symbolic—a point of no return for educators who have spent the last few years scrambling to keep up with an ever-expanding universe of labor-saving tools while playing cat-and-mouse with plagiarizing students. While some hail AI tech as a revolutionary key to learning—opening the door to more tailored and accessible strategies—others argue tools like ChatGPT are eroding students’ capacities to think critically and pursue truth.

It’s a crisis point challenging cultural assumptions about the purpose of education and spurring Christian teachers to carefully consider what it means to be human. They’re striving to help students catch a different vision for learning—one focused more on who young people are becoming than what they can produce.

Marisa Shuman, a teacher at the Young Women’s Leadership School of the Bronx in New York, generated a lesson plan using ChatGPT to examine its potential usefulness and pitfalls and to get her students to evaluate its effectiveness and think critically about AI. Hiroko Masuike / The New York Times / Redux

A FEW YEARS AGO, Flaming noticed a student in one of her online classes posting unusually long, detailed responses on a discussion board. The writing seemed completely different from the person’s normal style. On a hunch, Flaming copied and pasted her classmate’s responses into an online AI detector.

The detector’s verdict: 100% AI-generated.

OpenAI reports that one-third of college-aged Americans now use ChatGPT—with more than a quarter of that use centered on learning, tutoring, and schoolwork. The most commonly reported uses are starting papers, summarizing texts, and brainstorming ideas. But about 30% of respondents admitted to using the chatbot to draft essays, and over a quarter used it for exam answers.

Flaming said she’s witnessed a broad spectrum in how her peers approach AI. Some seem to use ChatGPT to “basically get their degree for them.” Others refuse to touch the chatbot at all because they’re afraid of getting flagged for plagiarism.

The modern field of artificial intelligence has existed since the 1950s. But AI models like ChatGPT pack a new kind of horsepower—distilling galaxies of data into simple prose in mere seconds. It’s a development some experts celebrate as the ultimate democratization of learning: instant answers for everyone.

But Flaming wasn’t enthusiastic about AI—at least, not before her sophomore year of college. That’s when one of her professors created an assignment asking students to experiment with AI. He encouraged Flaming to think of AI as a tool, and to learn to use it with discernment.

After that, Flaming discovered lots of helpful ways to use ChatGPT: lesson planning, anticipating student questions, and making rubrics. She also learned about plenty of other AI sites specifically designed to help teachers cater to their students’ individual learning styles and interests. Unlike ChatGPT, these AI apps show students problem-solving steps instead of just giving them answers.

“I think a lot of teachers view it almost like having a teacher per student,” Flaming said. That could be a game changer in public school classrooms where teachers have to divide their attention among 20 or more pupils.

Flaming is optimistic about these applications, but she sees some definite downsides, too. When she first started exploring ChatGPT, Flaming quickly realized how easy it was to outsource her thinking to AI. It’s a temptation she knows her future students will also face.

“Kids are smart,” Flaming said. “They’re techy, and they are going to figure out pretty quick how to use AI to just give them the answers if that’s something that they have access to or are allowed to do.”

Students at the Young Women’s Leadership School of the Bronx work on a lesson plan generated by ChatGPT. Hiroko Masuike / The New York Times / Redux

COMING UP WITH practical classroom applications for AI is one thing. But the very existence of the technology raises some fundamental questions about what it means to be human. Are students’ creative endeavors still worthwhile if AI models can do the same things—just better?

Josh Brake is an assistant professor of engineering at Harvey Mudd College. Last fall, Brake uploaded a screenshot of his first class assignment to ChatGPT and asked the chatbot to write a program for him. Within 30 seconds, the chatbot spit out a flawless snippet of code—as good as anything Brake could have written himself.

Brake said the moment highlighted the existential crisis his students are currently facing. But he said fears about AI superintelligence flow largely from a basic misunderstanding of what AI actually is. The very term artificial intelligence is a misnomer, he argued, because these programs aren’t really able to “think” the way humans can.

Brake said large language models (LLMs) like ChatGPT are essentially a super-sophisticated auto-complete. These algorithms have seen enough variations of the English language that they can usually guess at the right words and syntax to use. But what’s missing is the semantics—the meaning behind those words. In other words, ChatGPT creates the illusion of intelligent thought, but there’s actually nothing going on under the hood.
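
To picture what Brake means by “auto-complete,” consider a minimal sketch, written here in Python with a made-up three-sentence corpus (an illustration of the general idea, not how ChatGPT is actually built): count which words follow which, then keep emitting the statistically likeliest continuation. Real large language models swap the word counts for neural networks trained on vast swaths of text, but the core move—predicting plausible next words with no grasp of their meaning—is the same.

    # A toy "auto-complete": it counts which words follow which in a tiny
    # made-up corpus, then extends a prompt by always emitting the
    # statistically likeliest next word. It strings symbols together
    # without any grasp of what they mean.
    from collections import Counter, defaultdict

    corpus = (
        "the students write the essay . "
        "the students read the book . "
        "the teacher grades the essay ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    next_counts = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        next_counts[word][nxt] += 1

    def autocomplete(prompt, length=4):
        """Extend the prompt by repeatedly picking the likeliest next word."""
        words = prompt.split()
        for _ in range(length):
            candidates = next_counts.get(words[-1])
            if not candidates:
                break  # the model has never seen this word; it is stuck
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(autocomplete("the students"))
    # Prints "the students write the students write": fluent-looking,
    # grammatical, and utterly indifferent to meaning.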

That distinction might seem like splitting hairs, but Brake argues it makes all the difference between humans and computers. On day one, he started his class addressing the elephant in the room. “Listen, ChatGPT can do this thing for you,” he told his students, indicating his class assignments. “But what you will lose when you do that is tremendous.”

Technology is designed to cut down on friction, Brake said. But friction and struggle are essential parts of the learning process. Students who use AI to cut corners never actually learn how to do things for themselves. Because of that, Brake argued AI doesn’t actually democratize expertise—just the appearance of expertise.


Jason Thacker is a Boyce College professor who directs the research institute of the Ethics and Religious Liberty Commission. He’s a leading thinker helping teachers consider how to approach AI from a robust Biblical worldview. Thacker said the AI revolution is a needed gut check, calling educators back to the heart of a Christian vision of education.

“It’s not about information transfer,” Thacker said. “It’s more about whole person transformation.” For believers, that means becoming more like Christ.

Technology isn’t just a neutral tool, Thacker argued. Its bias is always toward efficiency—an aim that often clashes with the goal of spiritual formation. Because of that, Thacker encourages users to peel back the layers and ask themselves how AI products mold their habits and character: “They’re shaping and forming us, often in very subtle ways that we have to slow down to recognize.”

FOR SOME TEACHERS, rethinking technology means steering clear of AI altogether. Brad East is a theology professor at Abilene Christian University, and you won’t find any screens in his classes. Not because East thinks they’re fundamentally wrong—but because he wants to cut down on distractions and create space for students to think deeply about life’s most important questions.

Except in cases of academic accommodations, East doesn’t allow laptops, tablets, or phones. In his upper-level classes, he often doesn’t even use PowerPoint. Instead, he scribbles lecture notes on a whiteboard. It’s a teaching philosophy East labels the “Luddite Pedagogy.”

He’s also traded typed writing assignments for handwritten in-class essays. East asks students to buy physical textbooks and annotate them to show their thought processes. He knows there will always be a few students who just jot notes in the margins to try to convince him they did the reading. But others leave detailed arrows and scribbles and exclamations:

“Absolutely not!”

“Oh, gosh.”

“I can’t believe I’ve never heard this before.”

That’s exactly what East wants to see: students engaging thoughtfully with a text. He recognizes most of his students won’t become theologians or pastors. They’ll probably go on to get white-collar office jobs. But, along the way, East thinks, it’s crucial for them to wrestle with questions about God and humanity.

These kinds of moral questions aren’t ones chatbots can answer. And the stakes are high. East believes theology—what he calls “getting God right”—is actually the most important thing students could ever think about. Every semester, he makes that case to his students.

East is sure some students roll their eyes at his no-tech policy. But most seem to buy into the idea. He reads their anonymous comments at the end of every class, and they don’t hate him for it.

Often, they thank him.

Students tell him they struggle to focus in other classes, and they enjoy having time to settle their brains on things that matter.

East doesn’t have a problem with other professors incorporating AI into their teaching strategies in creative and thoughtful ways. He has colleagues who do that. He’s just trying to do something different—and he bristles at the notion that AI-powered education technology, or “ed tech,” is a must.

According to research group Market.US, the global AI ed tech market is on track to reach $92 billion by 2033—a projected annual growth rate of almost 40%. East said educators should have a healthy dose of skepticism about AI sales pitches from multimillion-dollar corporations hoping to land school system contracts. “They obviously have an incentive to sell us the moon,” he said.

A 2024 Microsoft and LinkedIn report found about 70% of employers would rather hire a less experienced applicant with AI skills than a more experienced candidate without them. But East isn’t worried about his students falling behind. For the most part, AI sites are so intuitive and user-friendly that people can quickly learn them: “Any of my children could use them if they wanted to.”

Employers also want workers who can think critically, communicate, and problem-solve—skills that aren’t so easily acquired. “ChatGPT cannot give you those any more than a robot that picks you up and runs a mile can develop your mile-running muscles for you,” East said. “You have to do the running.”

IT’S ONLY BEEN THREE YEARS since ChatGPT’s debut, and most educators are still in the throes of figuring out what to do with rapidly proliferating AI technology. Lynn Swaner is president of the U.S. branch of Cardus, a Christian think tank based in Canada. In 2023, she spearheaded a study exploring how Christian K-12 schools are navigating AI and found over 80% of surveyed staff believed AI would significantly alter teaching and learning.

But what exactly this transformation should look like is a lot less clear.

Human beings are made in the image of God. But Swaner said AI tools are made “in our image.” And that means they have the capacity for both good and evil. Swaner said we need robust moral frameworks guiding AI use, but there isn’t yet a consensus on what those should be.

Amy Flagler is director of teacher education at Montreat College, a private Christian school outside Asheville, N.C. She’s currently on a committee hammering out the school’s AI best practices. Flagler said the issue tends to spark a lot of fears for people. There are a lot of unknowns, and for many professors it can feel like just one more task to juggle.

Attitudes toward AI also tend to vary widely between departments. Humanities professors often express worries about things like plagiarism, while business professors tend to be more optimistic about the implications for design and efficiency. Because of that, Flagler said the school is planning to emphasize general principles like integrity and trust instead of aiming for a one-size-fits-all rule set.

In her own classroom, Flagler does her best to help her students—future teachers—think through the pros and cons of using AI in their own classrooms someday. When ChatGPT first launched, she demonstrated for her students how the chatbot could whip up a lesson plan from scratch.

“What do you think?” Flagler asked them.

They were blown away. Some thought it was fine for teachers to use AI in that way. Others were more skeptical.

Flagler encourages her students to be wary of AI while they are still developing their own proficiencies. They don’t yet have the expertise to spot gaps and shortfalls in AI-generated outputs. Maybe it could be a helpful tool after they’ve built up their skill set. But not before.

Most of all, Flagler encourages her students to be honest and up-front about how they’re using AI. She tells them to just ask for clarification if they aren’t sure where the ethical lines are. And she pledges to be transparent with her own AI use, too. For her, it all boils down to one main question: “Am I using [AI] in a way that increases trust between me and my students?”


EARLY DATA SUGGESTS more and more children are discovering AI at a young age—and figuring out how to hack the system and get ChatGPT to do their work for them. Already, a quarter of kids surveyed by the U.K.-based nonprofit Internet Matters admitted to using AI tools for homework help. And as many as 4 in 10 of them said they had experimented with some kind of generative AI.

These younger students—the ones still learning how to learn—are the demographic Cardus researcher Lynn Swaner worries about most when it comes to AI in education. They are still developing good study habits and writing skills, and she wonders how exposure to generative AI will change the way they mature.

These are also the students John Brown University student Emily Flaming is most excited to work with. She’s heading into a semester of student teaching and hopes to teach fourth and fifth graders in the public school system after that.

Flaming is well aware many of these 9- to 11-year-olds are already experimenting with AI at home. She’s eager to get into the classroom, but she expects it will take a lot of trial and error to figure out the best way to use AI to support—rather than replace—students’ own learning: “I don’t want to take away from what their brains can do.”


Grace Snell

Grace is a staff writer at WORLD and a graduate of the World Journalism Institute.
