One teacher’s battle with deepfake learning | WORLD

Deepfake learning

One teacher’s struggle with artificial intelligence


Francesco Carta Fotografo / Getty Images


Editor’s note: We agreed to keep the writer anonymous to protect her students’ privacy.

I was in the middle of my first year of teaching seventh grade when, in the midst of grading essays, I realized one of my students had cheated. But not in the usual way, like stealing another student’s essay or parental “ghostwriting.” No, this student, whose grade was tanking in the fourth quarter, who was barely completing her homework, and who regularly failed her vocabulary quizzes, turned in a flawless essay with high-level words like inadvertently and transcend.

Her name was Amanda. Normally, I would be thrilled, thinking that somehow she had experienced a miraculous breakthrough. But in this case, I was immediately filled with dread.

When I pulled her aside, she was not able to explain her thesis statement or main points, or even tell me much of what the essay said. It was obvious she didn’t write it, but who did? After talking to other teachers and doing some research, I found the culprit: ChatGPT.

When you are a first-year teacher, the majority of advice you get is something like “Don’t be a perfectionist, just get through the year.” But most first-year obstacles do not involve going head-to-head with a highly complex computational network with a never-before-seen ability to generate blocks of text that are so close to human-generated, they are described as “intelligent.”

It was a daunting realization.

After only a few months, there were multiple cases in our middle school of students who were failing their classes yet turned in spotless essays they couldn’t account for. All of the faculty and administration were scrambling to understand how to deal with artificial intelligence. The challenges were wide ranging, beginning with how to define it and ending with how to prevent students from abusing it.

At first, I mostly worried about students using AI to cheat. Now that I’ve had a few years to reflect, my concerns go much deeper than that.

My students are learning rudimentary communication skills in a world where technology can do the communicating for them. The skill itself is becoming less a human endeavor and more something delegated to machines. That’s probably why at least once a year a student complains to me: “Why do I have to learn to write when AI can just do it for me?”

My answer—“Because it’s an important skill to learn”—is not very persuasive.

IN THE WAKE of ChatGPT’s release in November 2022, our school and others stumbled as we tried to keep up with the pace of AI development. My principal invited me to participate in a committee tasked with updating our region’s policy. The first meeting was not what I expected.

The educators around the conference table taught a wide range of subjects. We began by introducing ourselves and explaining why we thought we were there.

One ex-tech entrepreneur took a swig from his can of Starbucks Doubleshot Espresso and spoke excitedly about how our schools must incorporate AI as soon as possible. He said there was so much money pouring into the technology, our students could either be part of the future or get left behind.

A young, philosophically minded man across the table shot down that idea. He had the opposite approach to AI: If it had to be involved in education, it should be handled cautiously, with a lot of restrictions.

The meeting eventually devolved into a back-and-forth between AI enthusiasts and skeptics, with one commonality: We were more afraid of what it was taking away than what it was adding to our students’ educational experience.

I left that first meeting feeling like we’d missed the real challenge. We needed to consider not just how to stop students from using AI to cheat but why they were using it to begin with.

One of my students last year was a headstrong, bilingual kid from a single-parent household. Naturally, Arianna had fewer support systems available to her and struggled with English as her second language. But she was passionate and eager to improve.

At the beginning of the year, I asked students to present a book they had read over the summer. Arianna chose Little Women. I knew how much she dreaded public speaking and was prepared for tears and protests. I was pleasantly surprised when she brought a giant poster board with beautifully drawn characters and detailed descriptions.

She stood in front of the class and confidently delivered a speech about Little Women that sounded like something a literary scholar could have written. Her notes contained descriptions such as: “Little Women is a coming-of-age novel that follows the lives of the four March sisters. … The story focuses on their hopes, struggles, and dreams as they move from childhood into adulthood.”

It was obviously not her writing.

I pulled her aside later to ask her more about the novel, including other characters and scenes from the book. She couldn’t describe any specific scenes or details beyond what was written on the poster board. Later, I put a prompt into ChatGPT. It spit out her presentation, word for word.

I was beyond frustrated. I couldn’t understand why she would cheat on something that was supposed to be an easy, check-the-box kind of assignment. So I asked her why she would turn to AI in the first place when I knew she was capable of writing a solid presentation. Her answer still haunts me.

She was anxious about what her peers would think of her and wanted to look smart in front of them. She didn’t think she could create a solid presentation, so she had AI make one for her.

Arianna experienced the self-doubt and insecurity that are normal for her age and situation. But instead of gaining confidence by creating a less-than-perfect presentation, she turned to AI and preferred to risk being caught.

UNFORTUNATELY, STUDENTS like Arianna are becoming more and more common. AI is affecting them in fundamental ways, a consideration missing from the overarching narrative on tech development. We hear horror stories about AI deepfakes and chatbots gone rogue, but sometimes the real horror story is much simpler than that. It’s the easy application of AI—essentially autofill on steroids—undermining the reality that “intelligence” is necessarily human.

Most of the discussion about AI in education seems to miss the obvious: that the goal of education is to produce an educated human being. It’s about the process, not generating another essay. But students, and too many parents, are oblivious to the truth that learning and growth require struggle and effort. One of my colleagues who teaches second grade told me one of her students refused to learn to write for an entire year. He told her that AI would do it for him.

Worse, even students who probably do understand, on some level, that using AI undermines their own development are using it anyway. Since 2022, I have seen more and more students struggle with the pressure of being perfect and in turn risk steep consequences to get a high grade.

Another one of my eighth graders, Julian, struggled with panic attacks throughout the spring. He faced a lot of pressure at home and among his peers to do well and was taking advanced math and Latin. As he prepared to apply for high school and aimed to place into honors classes, he couldn’t afford to let his grades drop.

In the last month of school, he found himself in a predicament: He needed to study for the Algebra Exemption Exam but also write a literary analysis essay I had assigned on a novel we read in class. He felt buried and believed he had to deliver perfection, so he turned to AI.

About 99% of the essay he turned in was written by ChatGPT, with some wording tweaked here and there to make it look authentic.

I will never forget pulling him aside in the hallway to talk about it.

“I know you used AI to write this essay. I have this report that proves it,” I told him. “You will receive a 50% on this essay, but what I care most about is you knowing how serious this is.” I told him he was getting off easy because he was still in middle school. But if he didn’t get his act together, I warned, this would cost him a lot in high school and even more in college.

He looked at me blankly, as if my cautionary words had gone straight over his head. He nodded and shrugged it off, and then walked away.
